00:00:00.001 Started by upstream project "autotest-per-patch" build number 130870 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:10.484 The recommended git tool is: git 00:00:10.485 using credential 00000000-0000-0000-0000-000000000002 00:00:10.487 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:10.499 Fetching changes from the remote Git repository 00:00:10.501 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:10.513 Using shallow fetch with depth 1 00:00:10.513 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:10.513 > git --version # timeout=10 00:00:10.524 > git --version # 'git version 2.39.2' 00:00:10.524 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:10.537 Setting http proxy: proxy-dmz.intel.com:911 00:00:10.537 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:14.190 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:14.206 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:14.221 Checking out Revision f95f9907808933a1db7196e15e13478e0f322ee7 (FETCH_HEAD) 00:00:14.221 > git config core.sparsecheckout # timeout=10 00:00:14.235 > git read-tree -mu HEAD # timeout=10 00:00:14.255 > git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=5 00:00:14.277 Commit message: "Revert "autotest-phy: replace deprecated label for nvmf-cvl"" 00:00:14.277 > git rev-list --no-walk f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=10 00:00:14.500 [Pipeline] Start of Pipeline 00:00:14.513 [Pipeline] library 00:00:14.514 Loading library shm_lib@master 00:00:14.515 Library shm_lib@master is cached. Copying from home. 00:00:14.530 [Pipeline] node 00:00:29.532 Still waiting to schedule task 00:00:29.532 Waiting for next available executor on ‘vagrant-vm-host’ 00:04:51.462 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:04:51.464 [Pipeline] { 00:04:51.475 [Pipeline] catchError 00:04:51.477 [Pipeline] { 00:04:51.493 [Pipeline] wrap 00:04:51.501 [Pipeline] { 00:04:51.508 [Pipeline] stage 00:04:51.510 [Pipeline] { (Prologue) 00:04:51.524 [Pipeline] echo 00:04:51.525 Node: VM-host-SM17 00:04:51.530 [Pipeline] cleanWs 00:04:51.540 [WS-CLEANUP] Deleting project workspace... 00:04:51.540 [WS-CLEANUP] Deferred wipeout is used... 
00:04:51.546 [WS-CLEANUP] done 00:04:51.821 [Pipeline] setCustomBuildProperty 00:04:51.937 [Pipeline] httpRequest 00:04:52.520 [Pipeline] echo 00:04:52.522 Sorcerer 10.211.164.101 is alive 00:04:52.533 [Pipeline] retry 00:04:52.535 [Pipeline] { 00:04:52.550 [Pipeline] httpRequest 00:04:52.555 HttpMethod: GET 00:04:52.556 URL: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz 00:04:52.556 Sending request to url: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz 00:04:52.558 Response Code: HTTP/1.1 200 OK 00:04:52.559 Success: Status code 200 is in the accepted range: 200,404 00:04:52.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz 00:04:53.045 [Pipeline] } 00:04:53.055 [Pipeline] // retry 00:04:53.060 [Pipeline] sh 00:04:53.335 + tar --no-same-owner -xf jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz 00:04:53.353 [Pipeline] httpRequest 00:04:53.957 [Pipeline] echo 00:04:53.959 Sorcerer 10.211.164.101 is alive 00:04:53.970 [Pipeline] retry 00:04:53.972 [Pipeline] { 00:04:53.985 [Pipeline] httpRequest 00:04:53.989 HttpMethod: GET 00:04:53.990 URL: http://10.211.164.101/packages/spdk_2a4f56c54d9adf89f0a3da3351edcad2c0a1ed33.tar.gz 00:04:53.991 Sending request to url: http://10.211.164.101/packages/spdk_2a4f56c54d9adf89f0a3da3351edcad2c0a1ed33.tar.gz 00:04:54.001 Response Code: HTTP/1.1 200 OK 00:04:54.001 Success: Status code 200 is in the accepted range: 200,404 00:04:54.002 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_2a4f56c54d9adf89f0a3da3351edcad2c0a1ed33.tar.gz 00:05:26.732 [Pipeline] } 00:05:26.747 [Pipeline] // retry 00:05:26.754 [Pipeline] sh 00:05:27.030 + tar --no-same-owner -xf spdk_2a4f56c54d9adf89f0a3da3351edcad2c0a1ed33.tar.gz 00:05:30.324 [Pipeline] sh 00:05:30.671 + git -C spdk log --oneline -n5 00:05:30.671 2a4f56c54 bdev/nvme: controller failover/multipath doc change 00:05:30.671 d16db39ee bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:05:30.671 32fb30b70 bdev/nvme: changed default config to multipath 00:05:30.671 397c5fc31 bdev/nvme: ctrl config consistency check 00:05:30.671 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:05:30.689 [Pipeline] writeFile 00:05:30.704 [Pipeline] sh 00:05:30.982 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:30.994 [Pipeline] sh 00:05:31.273 + cat autorun-spdk.conf 00:05:31.273 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:31.273 SPDK_TEST_NVMF=1 00:05:31.273 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:31.273 SPDK_TEST_URING=1 00:05:31.273 SPDK_TEST_USDT=1 00:05:31.273 SPDK_RUN_UBSAN=1 00:05:31.273 NET_TYPE=virt 00:05:31.273 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:31.280 RUN_NIGHTLY=0 00:05:31.282 [Pipeline] } 00:05:31.295 [Pipeline] // stage 00:05:31.309 [Pipeline] stage 00:05:31.311 [Pipeline] { (Run VM) 00:05:31.324 [Pipeline] sh 00:05:31.604 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:31.604 + echo 'Start stage prepare_nvme.sh' 00:05:31.604 Start stage prepare_nvme.sh 00:05:31.604 + [[ -n 7 ]] 00:05:31.604 + disk_prefix=ex7 00:05:31.604 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:05:31.604 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:05:31.604 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:05:31.604 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:31.604 ++ SPDK_TEST_NVMF=1 
00:05:31.604 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:31.604 ++ SPDK_TEST_URING=1 00:05:31.604 ++ SPDK_TEST_USDT=1 00:05:31.604 ++ SPDK_RUN_UBSAN=1 00:05:31.604 ++ NET_TYPE=virt 00:05:31.604 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:31.604 ++ RUN_NIGHTLY=0 00:05:31.604 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:05:31.604 + nvme_files=() 00:05:31.604 + declare -A nvme_files 00:05:31.604 + backend_dir=/var/lib/libvirt/images/backends 00:05:31.604 + nvme_files['nvme.img']=5G 00:05:31.604 + nvme_files['nvme-cmb.img']=5G 00:05:31.604 + nvme_files['nvme-multi0.img']=4G 00:05:31.604 + nvme_files['nvme-multi1.img']=4G 00:05:31.604 + nvme_files['nvme-multi2.img']=4G 00:05:31.604 + nvme_files['nvme-openstack.img']=8G 00:05:31.604 + nvme_files['nvme-zns.img']=5G 00:05:31.604 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:31.604 + (( SPDK_TEST_FTL == 1 )) 00:05:31.604 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:31.604 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:05:31.604 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.604 + for nvme in "${!nvme_files[@]}" 00:05:31.604 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:05:32.169 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:32.169 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:05:32.169 + echo 'End stage prepare_nvme.sh' 00:05:32.169 End stage prepare_nvme.sh 00:05:32.179 [Pipeline] sh 00:05:32.457 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:32.457 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b 
/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:05:32.457 00:05:32.457 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:05:32.457 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:05:32.457 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:05:32.457 HELP=0 00:05:32.457 DRY_RUN=0 00:05:32.457 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:05:32.457 NVME_DISKS_TYPE=nvme,nvme, 00:05:32.457 NVME_AUTO_CREATE=0 00:05:32.457 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:05:32.457 NVME_CMB=,, 00:05:32.457 NVME_PMR=,, 00:05:32.457 NVME_ZNS=,, 00:05:32.457 NVME_MS=,, 00:05:32.457 NVME_FDP=,, 00:05:32.457 SPDK_VAGRANT_DISTRO=fedora39 00:05:32.457 SPDK_VAGRANT_VMCPU=10 00:05:32.457 SPDK_VAGRANT_VMRAM=12288 00:05:32.457 SPDK_VAGRANT_PROVIDER=libvirt 00:05:32.457 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:32.457 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:32.457 SPDK_OPENSTACK_NETWORK=0 00:05:32.457 VAGRANT_PACKAGE_BOX=0 00:05:32.457 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:05:32.457 FORCE_DISTRO=true 00:05:32.457 VAGRANT_BOX_VERSION= 00:05:32.457 EXTRA_VAGRANTFILES= 00:05:32.457 NIC_MODEL=e1000 00:05:32.457 00:05:32.457 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt' 00:05:32.457 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:05:35.744 Bringing machine 'default' up with 'libvirt' provider... 00:05:36.310 ==> default: Creating image (snapshot of base box volume). 00:05:36.310 ==> default: Creating domain with the following settings... 
00:05:36.310 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728299791_d59d575a18436675ad94 00:05:36.310 ==> default: -- Domain type: kvm 00:05:36.310 ==> default: -- Cpus: 10 00:05:36.310 ==> default: -- Feature: acpi 00:05:36.310 ==> default: -- Feature: apic 00:05:36.310 ==> default: -- Feature: pae 00:05:36.310 ==> default: -- Memory: 12288M 00:05:36.310 ==> default: -- Memory Backing: hugepages: 00:05:36.310 ==> default: -- Management MAC: 00:05:36.310 ==> default: -- Loader: 00:05:36.310 ==> default: -- Nvram: 00:05:36.310 ==> default: -- Base box: spdk/fedora39 00:05:36.310 ==> default: -- Storage pool: default 00:05:36.310 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728299791_d59d575a18436675ad94.img (20G) 00:05:36.310 ==> default: -- Volume Cache: default 00:05:36.310 ==> default: -- Kernel: 00:05:36.310 ==> default: -- Initrd: 00:05:36.310 ==> default: -- Graphics Type: vnc 00:05:36.310 ==> default: -- Graphics Port: -1 00:05:36.310 ==> default: -- Graphics IP: 127.0.0.1 00:05:36.310 ==> default: -- Graphics Password: Not defined 00:05:36.310 ==> default: -- Video Type: cirrus 00:05:36.310 ==> default: -- Video VRAM: 9216 00:05:36.310 ==> default: -- Sound Type: 00:05:36.310 ==> default: -- Keymap: en-us 00:05:36.310 ==> default: -- TPM Path: 00:05:36.310 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:36.310 ==> default: -- Command line args: 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:36.310 ==> default: -> value=-drive, 00:05:36.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:36.310 ==> default: -> value=-drive, 00:05:36.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.310 ==> default: -> value=-drive, 00:05:36.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.310 ==> default: -> value=-drive, 00:05:36.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:36.310 ==> default: -> value=-device, 00:05:36.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.567 ==> default: Creating shared folders metadata... 00:05:36.567 ==> default: Starting domain. 00:05:37.944 ==> default: Waiting for domain to get an IP address... 00:05:56.065 ==> default: Waiting for SSH to become available... 00:05:56.065 ==> default: Configuring and enabling network interfaces... 
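For reference, the per-device values printed above reassemble into roughly the following single QEMU invocation (a reconstruction from the values shown in this log, not a line taken verbatim from it; the real command line built by vagrant-libvirt also carries machine, memory and network options omitted here, and the emulator path comes from SPDK_QEMU_EMULATOR):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096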
00:05:58.007 default: SSH address: 192.168.121.76:22 00:05:58.007 default: SSH username: vagrant 00:05:58.007 default: SSH auth method: private key 00:06:00.536 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:08.646 ==> default: Mounting SSHFS shared folder... 00:06:09.579 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:09.579 ==> default: Checking Mount.. 00:06:10.954 ==> default: Folder Successfully Mounted! 00:06:10.954 ==> default: Running provisioner: file... 00:06:11.520 default: ~/.gitconfig => .gitconfig 00:06:12.086 00:06:12.086 SUCCESS! 00:06:12.086 00:06:12.086 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:06:12.086 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:12.086 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:06:12.086 00:06:12.093 [Pipeline] } 00:06:12.108 [Pipeline] // stage 00:06:12.118 [Pipeline] dir 00:06:12.118 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt 00:06:12.120 [Pipeline] { 00:06:12.132 [Pipeline] catchError 00:06:12.134 [Pipeline] { 00:06:12.146 [Pipeline] sh 00:06:12.422 + vagrant ssh-config --host vagrant 00:06:12.422 + sed -ne /^Host/,$p 00:06:12.422 + tee ssh_conf 00:06:16.604 Host vagrant 00:06:16.604 HostName 192.168.121.76 00:06:16.604 User vagrant 00:06:16.604 Port 22 00:06:16.604 UserKnownHostsFile /dev/null 00:06:16.604 StrictHostKeyChecking no 00:06:16.605 PasswordAuthentication no 00:06:16.605 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:16.605 IdentitiesOnly yes 00:06:16.605 LogLevel FATAL 00:06:16.605 ForwardAgent yes 00:06:16.605 ForwardX11 yes 00:06:16.605 00:06:16.618 [Pipeline] withEnv 00:06:16.620 [Pipeline] { 00:06:16.633 [Pipeline] sh 00:06:16.911 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:16.911 source /etc/os-release 00:06:16.911 [[ -e /image.version ]] && img=$(< /image.version) 00:06:16.911 # Minimal, systemd-like check. 00:06:16.911 if [[ -e /.dockerenv ]]; then 00:06:16.911 # Clear garbage from the node's name: 00:06:16.911 # agt-er_autotest_547-896 -> autotest_547-896 00:06:16.911 # $HOSTNAME is the actual container id 00:06:16.911 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:16.911 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:16.911 # We can assume this is a mount from a host where container is running, 00:06:16.911 # so fetch its hostname to easily identify the target swarm worker. 
00:06:16.911 container="$(< /etc/hostname) ($agent)" 00:06:16.911 else 00:06:16.911 # Fallback 00:06:16.911 container=$agent 00:06:16.911 fi 00:06:16.911 fi 00:06:16.911 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:16.912 00:06:16.921 [Pipeline] } 00:06:16.938 [Pipeline] // withEnv 00:06:16.945 [Pipeline] setCustomBuildProperty 00:06:16.959 [Pipeline] stage 00:06:16.961 [Pipeline] { (Tests) 00:06:16.978 [Pipeline] sh 00:06:17.256 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:17.269 [Pipeline] sh 00:06:17.547 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:17.560 [Pipeline] timeout 00:06:17.560 Timeout set to expire in 1 hr 0 min 00:06:17.562 [Pipeline] { 00:06:17.575 [Pipeline] sh 00:06:17.852 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:18.418 HEAD is now at 2a4f56c54 bdev/nvme: controller failover/multipath doc change 00:06:18.430 [Pipeline] sh 00:06:18.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:18.980 [Pipeline] sh 00:06:19.258 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:19.532 [Pipeline] sh 00:06:19.809 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:06:20.067 ++ readlink -f spdk_repo 00:06:20.067 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:20.067 + [[ -n /home/vagrant/spdk_repo ]] 00:06:20.067 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:20.067 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:20.067 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:20.067 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:20.067 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:20.067 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:06:20.067 + cd /home/vagrant/spdk_repo 00:06:20.067 + source /etc/os-release 00:06:20.067 ++ NAME='Fedora Linux' 00:06:20.067 ++ VERSION='39 (Cloud Edition)' 00:06:20.067 ++ ID=fedora 00:06:20.067 ++ VERSION_ID=39 00:06:20.067 ++ VERSION_CODENAME= 00:06:20.067 ++ PLATFORM_ID=platform:f39 00:06:20.067 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:20.067 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:20.067 ++ LOGO=fedora-logo-icon 00:06:20.067 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:20.067 ++ HOME_URL=https://fedoraproject.org/ 00:06:20.067 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:20.067 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:20.067 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:20.067 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:20.067 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:20.067 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:20.067 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:20.067 ++ SUPPORT_END=2024-11-12 00:06:20.067 ++ VARIANT='Cloud Edition' 00:06:20.067 ++ VARIANT_ID=cloud 00:06:20.067 + uname -a 00:06:20.067 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:20.067 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:20.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.326 Hugepages 00:06:20.326 node hugesize free / total 00:06:20.326 node0 1048576kB 0 / 0 00:06:20.326 node0 2048kB 0 / 0 00:06:20.326 00:06:20.326 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:20.584 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:20.584 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:20.584 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:20.584 + rm -f /tmp/spdk-ld-path 00:06:20.585 + source autorun-spdk.conf 00:06:20.585 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:20.585 ++ SPDK_TEST_NVMF=1 00:06:20.585 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:20.585 ++ SPDK_TEST_URING=1 00:06:20.585 ++ SPDK_TEST_USDT=1 00:06:20.585 ++ SPDK_RUN_UBSAN=1 00:06:20.585 ++ NET_TYPE=virt 00:06:20.585 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:20.585 ++ RUN_NIGHTLY=0 00:06:20.585 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:20.585 + [[ -n '' ]] 00:06:20.585 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:20.585 + for M in /var/spdk/build-*-manifest.txt 00:06:20.585 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:20.585 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:20.585 + for M in /var/spdk/build-*-manifest.txt 00:06:20.585 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:20.585 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:20.585 + for M in /var/spdk/build-*-manifest.txt 00:06:20.585 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:20.585 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:20.585 ++ uname 00:06:20.585 + [[ Linux == \L\i\n\u\x ]] 00:06:20.585 + sudo dmesg -T 00:06:20.585 + sudo dmesg --clear 00:06:20.585 + dmesg_pid=5210 00:06:20.585 + sudo dmesg -Tw 00:06:20.585 + [[ Fedora Linux == FreeBSD ]] 00:06:20.585 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:20.585 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:20.585 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:20.585 + [[ -x /usr/src/fio-static/fio ]] 00:06:20.585 + export FIO_BIN=/usr/src/fio-static/fio 00:06:20.585 + FIO_BIN=/usr/src/fio-static/fio 00:06:20.585 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:20.585 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:20.585 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:20.585 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:20.585 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:20.585 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:20.585 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:20.585 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:20.585 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:20.585 Test configuration: 00:06:20.585 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:20.585 SPDK_TEST_NVMF=1 00:06:20.585 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:20.585 SPDK_TEST_URING=1 00:06:20.585 SPDK_TEST_USDT=1 00:06:20.585 SPDK_RUN_UBSAN=1 00:06:20.585 NET_TYPE=virt 00:06:20.585 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:20.842 RUN_NIGHTLY=0 11:17:16 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:06:20.842 11:17:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.842 11:17:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:20.842 11:17:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:20.842 11:17:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.842 11:17:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.842 11:17:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.842 11:17:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.843 11:17:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.843 11:17:16 -- paths/export.sh@5 -- $ export PATH 00:06:20.843 11:17:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.843 11:17:16 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:06:20.843 11:17:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:06:20.843 11:17:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728299836.XXXXXX 00:06:20.843 11:17:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728299836.yhWF1e 00:06:20.843 11:17:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:06:20.843 11:17:16 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:06:20.843 11:17:16 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:20.843 11:17:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:20.843 11:17:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:20.843 11:17:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:06:20.843 11:17:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:20.843 11:17:16 -- common/autotest_common.sh@10 -- $ set +x 00:06:20.843 11:17:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:06:20.843 11:17:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:06:20.843 11:17:16 -- pm/common@17 -- $ local monitor 00:06:20.843 11:17:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:20.843 11:17:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:20.843 11:17:16 -- pm/common@25 -- $ sleep 1 00:06:20.843 11:17:16 -- pm/common@21 -- $ date +%s 00:06:20.843 11:17:16 -- pm/common@21 -- $ date +%s 00:06:20.843 11:17:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728299836 00:06:20.843 11:17:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728299836 00:06:20.843 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728299836_collect-vmstat.pm.log 00:06:20.843 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728299836_collect-cpu-load.pm.log 00:06:21.778 11:17:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:06:21.778 11:17:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:21.778 11:17:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:21.778 11:17:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:21.778 11:17:17 -- spdk/autobuild.sh@16 -- $ date -u 00:06:21.778 Mon Oct 7 11:17:17 AM UTC 2024 00:06:21.778 11:17:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:21.778 v25.01-pre-39-g2a4f56c54 00:06:21.778 11:17:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:21.778 11:17:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:21.778 11:17:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:21.778 11:17:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:21.778 11:17:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:21.778 11:17:17 -- common/autotest_common.sh@10 -- $ set +x 00:06:21.778 
************************************ 00:06:21.778 START TEST ubsan 00:06:21.778 ************************************ 00:06:21.778 using ubsan 00:06:21.778 11:17:17 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:06:21.778 00:06:21.778 real 0m0.000s 00:06:21.778 user 0m0.000s 00:06:21.778 sys 0m0.000s 00:06:21.778 11:17:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:21.778 ************************************ 00:06:21.778 END TEST ubsan 00:06:21.778 ************************************ 00:06:21.778 11:17:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:21.778 11:17:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:21.778 11:17:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:21.778 11:17:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:21.778 11:17:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:06:22.036 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:22.036 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:22.294 Using 'verbs' RDMA provider 00:06:35.425 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:47.627 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:47.886 Creating mk/config.mk...done. 00:06:47.886 Creating mk/cc.flags.mk...done. 00:06:47.886 Type 'make' to build. 00:06:47.886 11:17:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:47.886 11:17:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:47.886 11:17:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:47.886 11:17:43 -- common/autotest_common.sh@10 -- $ set +x 00:06:47.886 ************************************ 00:06:47.886 START TEST make 00:06:47.886 ************************************ 00:06:47.886 11:17:43 make -- common/autotest_common.sh@1125 -- $ make -j10 00:06:48.452 make[1]: Nothing to be done for 'all'. 
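The configure and make steps recorded above can be reproduced by hand from the same flags (a sketch assuming the checkout lives at /home/vagrant/spdk_repo/spdk as in this run):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10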
00:07:00.660 The Meson build system 00:07:00.660 Version: 1.5.0 00:07:00.660 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:00.660 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:00.660 Build type: native build 00:07:00.660 Program cat found: YES (/usr/bin/cat) 00:07:00.660 Project name: DPDK 00:07:00.660 Project version: 24.03.0 00:07:00.660 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:00.660 C linker for the host machine: cc ld.bfd 2.40-14 00:07:00.660 Host machine cpu family: x86_64 00:07:00.660 Host machine cpu: x86_64 00:07:00.660 Message: ## Building in Developer Mode ## 00:07:00.660 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:00.661 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:00.661 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:00.661 Program python3 found: YES (/usr/bin/python3) 00:07:00.661 Program cat found: YES (/usr/bin/cat) 00:07:00.661 Compiler for C supports arguments -march=native: YES 00:07:00.661 Checking for size of "void *" : 8 00:07:00.661 Checking for size of "void *" : 8 (cached) 00:07:00.661 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:00.661 Library m found: YES 00:07:00.661 Library numa found: YES 00:07:00.661 Has header "numaif.h" : YES 00:07:00.661 Library fdt found: NO 00:07:00.661 Library execinfo found: NO 00:07:00.661 Has header "execinfo.h" : YES 00:07:00.661 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:00.661 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:00.661 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:00.661 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:00.661 Run-time dependency openssl found: YES 3.1.1 00:07:00.661 Run-time dependency libpcap found: YES 1.10.4 00:07:00.661 Has header "pcap.h" with dependency libpcap: YES 00:07:00.661 Compiler for C supports arguments -Wcast-qual: YES 00:07:00.661 Compiler for C supports arguments -Wdeprecated: YES 00:07:00.661 Compiler for C supports arguments -Wformat: YES 00:07:00.661 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:00.661 Compiler for C supports arguments -Wformat-security: NO 00:07:00.661 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:00.661 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:00.661 Compiler for C supports arguments -Wnested-externs: YES 00:07:00.661 Compiler for C supports arguments -Wold-style-definition: YES 00:07:00.661 Compiler for C supports arguments -Wpointer-arith: YES 00:07:00.661 Compiler for C supports arguments -Wsign-compare: YES 00:07:00.661 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:00.661 Compiler for C supports arguments -Wundef: YES 00:07:00.661 Compiler for C supports arguments -Wwrite-strings: YES 00:07:00.661 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:00.661 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:00.661 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:00.661 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:00.661 Program objdump found: YES (/usr/bin/objdump) 00:07:00.661 Compiler for C supports arguments -mavx512f: YES 00:07:00.661 Checking if "AVX512 checking" compiles: YES 00:07:00.661 Fetching value of define "__SSE4_2__" : 1 00:07:00.661 Fetching value of define 
"__AES__" : 1 00:07:00.661 Fetching value of define "__AVX__" : 1 00:07:00.661 Fetching value of define "__AVX2__" : 1 00:07:00.661 Fetching value of define "__AVX512BW__" : (undefined) 00:07:00.661 Fetching value of define "__AVX512CD__" : (undefined) 00:07:00.661 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:00.661 Fetching value of define "__AVX512F__" : (undefined) 00:07:00.661 Fetching value of define "__AVX512VL__" : (undefined) 00:07:00.661 Fetching value of define "__PCLMUL__" : 1 00:07:00.661 Fetching value of define "__RDRND__" : 1 00:07:00.661 Fetching value of define "__RDSEED__" : 1 00:07:00.661 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:00.661 Fetching value of define "__znver1__" : (undefined) 00:07:00.661 Fetching value of define "__znver2__" : (undefined) 00:07:00.661 Fetching value of define "__znver3__" : (undefined) 00:07:00.661 Fetching value of define "__znver4__" : (undefined) 00:07:00.661 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:00.661 Message: lib/log: Defining dependency "log" 00:07:00.661 Message: lib/kvargs: Defining dependency "kvargs" 00:07:00.661 Message: lib/telemetry: Defining dependency "telemetry" 00:07:00.661 Checking for function "getentropy" : NO 00:07:00.661 Message: lib/eal: Defining dependency "eal" 00:07:00.661 Message: lib/ring: Defining dependency "ring" 00:07:00.661 Message: lib/rcu: Defining dependency "rcu" 00:07:00.661 Message: lib/mempool: Defining dependency "mempool" 00:07:00.661 Message: lib/mbuf: Defining dependency "mbuf" 00:07:00.661 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:00.661 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:00.661 Compiler for C supports arguments -mpclmul: YES 00:07:00.661 Compiler for C supports arguments -maes: YES 00:07:00.661 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:00.661 Compiler for C supports arguments -mavx512bw: YES 00:07:00.661 Compiler for C supports arguments -mavx512dq: YES 00:07:00.661 Compiler for C supports arguments -mavx512vl: YES 00:07:00.661 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:00.661 Compiler for C supports arguments -mavx2: YES 00:07:00.661 Compiler for C supports arguments -mavx: YES 00:07:00.661 Message: lib/net: Defining dependency "net" 00:07:00.661 Message: lib/meter: Defining dependency "meter" 00:07:00.661 Message: lib/ethdev: Defining dependency "ethdev" 00:07:00.661 Message: lib/pci: Defining dependency "pci" 00:07:00.661 Message: lib/cmdline: Defining dependency "cmdline" 00:07:00.661 Message: lib/hash: Defining dependency "hash" 00:07:00.661 Message: lib/timer: Defining dependency "timer" 00:07:00.661 Message: lib/compressdev: Defining dependency "compressdev" 00:07:00.661 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:00.661 Message: lib/dmadev: Defining dependency "dmadev" 00:07:00.661 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:00.661 Message: lib/power: Defining dependency "power" 00:07:00.661 Message: lib/reorder: Defining dependency "reorder" 00:07:00.661 Message: lib/security: Defining dependency "security" 00:07:00.661 Has header "linux/userfaultfd.h" : YES 00:07:00.661 Has header "linux/vduse.h" : YES 00:07:00.661 Message: lib/vhost: Defining dependency "vhost" 00:07:00.661 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:00.661 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:00.661 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:00.661 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:00.661 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:00.661 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:00.661 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:00.661 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:00.661 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:00.661 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:00.661 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:00.661 Configuring doxy-api-html.conf using configuration 00:07:00.661 Configuring doxy-api-man.conf using configuration 00:07:00.661 Program mandb found: YES (/usr/bin/mandb) 00:07:00.661 Program sphinx-build found: NO 00:07:00.661 Configuring rte_build_config.h using configuration 00:07:00.661 Message: 00:07:00.661 ================= 00:07:00.661 Applications Enabled 00:07:00.661 ================= 00:07:00.661 00:07:00.661 apps: 00:07:00.661 00:07:00.661 00:07:00.661 Message: 00:07:00.661 ================= 00:07:00.661 Libraries Enabled 00:07:00.661 ================= 00:07:00.661 00:07:00.661 libs: 00:07:00.661 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:00.661 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:00.661 cryptodev, dmadev, power, reorder, security, vhost, 00:07:00.661 00:07:00.661 Message: 00:07:00.661 =============== 00:07:00.661 Drivers Enabled 00:07:00.661 =============== 00:07:00.661 00:07:00.661 common: 00:07:00.661 00:07:00.661 bus: 00:07:00.661 pci, vdev, 00:07:00.661 mempool: 00:07:00.661 ring, 00:07:00.661 dma: 00:07:00.661 00:07:00.661 net: 00:07:00.661 00:07:00.661 crypto: 00:07:00.661 00:07:00.661 compress: 00:07:00.661 00:07:00.661 vdpa: 00:07:00.661 00:07:00.661 00:07:00.661 Message: 00:07:00.661 ================= 00:07:00.661 Content Skipped 00:07:00.661 ================= 00:07:00.661 00:07:00.661 apps: 00:07:00.661 dumpcap: explicitly disabled via build config 00:07:00.661 graph: explicitly disabled via build config 00:07:00.661 pdump: explicitly disabled via build config 00:07:00.661 proc-info: explicitly disabled via build config 00:07:00.661 test-acl: explicitly disabled via build config 00:07:00.661 test-bbdev: explicitly disabled via build config 00:07:00.661 test-cmdline: explicitly disabled via build config 00:07:00.661 test-compress-perf: explicitly disabled via build config 00:07:00.661 test-crypto-perf: explicitly disabled via build config 00:07:00.661 test-dma-perf: explicitly disabled via build config 00:07:00.661 test-eventdev: explicitly disabled via build config 00:07:00.661 test-fib: explicitly disabled via build config 00:07:00.661 test-flow-perf: explicitly disabled via build config 00:07:00.661 test-gpudev: explicitly disabled via build config 00:07:00.661 test-mldev: explicitly disabled via build config 00:07:00.661 test-pipeline: explicitly disabled via build config 00:07:00.661 test-pmd: explicitly disabled via build config 00:07:00.661 test-regex: explicitly disabled via build config 00:07:00.661 test-sad: explicitly disabled via build config 00:07:00.661 test-security-perf: explicitly disabled via build config 00:07:00.661 00:07:00.661 libs: 00:07:00.661 argparse: explicitly disabled via build config 00:07:00.661 metrics: explicitly disabled via build config 00:07:00.661 acl: explicitly disabled via build config 00:07:00.661 bbdev: explicitly disabled via build config 
00:07:00.661 bitratestats: explicitly disabled via build config 00:07:00.661 bpf: explicitly disabled via build config 00:07:00.662 cfgfile: explicitly disabled via build config 00:07:00.662 distributor: explicitly disabled via build config 00:07:00.662 efd: explicitly disabled via build config 00:07:00.662 eventdev: explicitly disabled via build config 00:07:00.662 dispatcher: explicitly disabled via build config 00:07:00.662 gpudev: explicitly disabled via build config 00:07:00.662 gro: explicitly disabled via build config 00:07:00.662 gso: explicitly disabled via build config 00:07:00.662 ip_frag: explicitly disabled via build config 00:07:00.662 jobstats: explicitly disabled via build config 00:07:00.662 latencystats: explicitly disabled via build config 00:07:00.662 lpm: explicitly disabled via build config 00:07:00.662 member: explicitly disabled via build config 00:07:00.662 pcapng: explicitly disabled via build config 00:07:00.662 rawdev: explicitly disabled via build config 00:07:00.662 regexdev: explicitly disabled via build config 00:07:00.662 mldev: explicitly disabled via build config 00:07:00.662 rib: explicitly disabled via build config 00:07:00.662 sched: explicitly disabled via build config 00:07:00.662 stack: explicitly disabled via build config 00:07:00.662 ipsec: explicitly disabled via build config 00:07:00.662 pdcp: explicitly disabled via build config 00:07:00.662 fib: explicitly disabled via build config 00:07:00.662 port: explicitly disabled via build config 00:07:00.662 pdump: explicitly disabled via build config 00:07:00.662 table: explicitly disabled via build config 00:07:00.662 pipeline: explicitly disabled via build config 00:07:00.662 graph: explicitly disabled via build config 00:07:00.662 node: explicitly disabled via build config 00:07:00.662 00:07:00.662 drivers: 00:07:00.662 common/cpt: not in enabled drivers build config 00:07:00.662 common/dpaax: not in enabled drivers build config 00:07:00.662 common/iavf: not in enabled drivers build config 00:07:00.662 common/idpf: not in enabled drivers build config 00:07:00.662 common/ionic: not in enabled drivers build config 00:07:00.662 common/mvep: not in enabled drivers build config 00:07:00.662 common/octeontx: not in enabled drivers build config 00:07:00.662 bus/auxiliary: not in enabled drivers build config 00:07:00.662 bus/cdx: not in enabled drivers build config 00:07:00.662 bus/dpaa: not in enabled drivers build config 00:07:00.662 bus/fslmc: not in enabled drivers build config 00:07:00.662 bus/ifpga: not in enabled drivers build config 00:07:00.662 bus/platform: not in enabled drivers build config 00:07:00.662 bus/uacce: not in enabled drivers build config 00:07:00.662 bus/vmbus: not in enabled drivers build config 00:07:00.662 common/cnxk: not in enabled drivers build config 00:07:00.662 common/mlx5: not in enabled drivers build config 00:07:00.662 common/nfp: not in enabled drivers build config 00:07:00.662 common/nitrox: not in enabled drivers build config 00:07:00.662 common/qat: not in enabled drivers build config 00:07:00.662 common/sfc_efx: not in enabled drivers build config 00:07:00.662 mempool/bucket: not in enabled drivers build config 00:07:00.662 mempool/cnxk: not in enabled drivers build config 00:07:00.662 mempool/dpaa: not in enabled drivers build config 00:07:00.662 mempool/dpaa2: not in enabled drivers build config 00:07:00.662 mempool/octeontx: not in enabled drivers build config 00:07:00.662 mempool/stack: not in enabled drivers build config 00:07:00.662 dma/cnxk: not in enabled 
drivers build config 00:07:00.662 dma/dpaa: not in enabled drivers build config 00:07:00.662 dma/dpaa2: not in enabled drivers build config 00:07:00.662 dma/hisilicon: not in enabled drivers build config 00:07:00.662 dma/idxd: not in enabled drivers build config 00:07:00.662 dma/ioat: not in enabled drivers build config 00:07:00.662 dma/skeleton: not in enabled drivers build config 00:07:00.662 net/af_packet: not in enabled drivers build config 00:07:00.662 net/af_xdp: not in enabled drivers build config 00:07:00.662 net/ark: not in enabled drivers build config 00:07:00.662 net/atlantic: not in enabled drivers build config 00:07:00.662 net/avp: not in enabled drivers build config 00:07:00.662 net/axgbe: not in enabled drivers build config 00:07:00.662 net/bnx2x: not in enabled drivers build config 00:07:00.662 net/bnxt: not in enabled drivers build config 00:07:00.662 net/bonding: not in enabled drivers build config 00:07:00.662 net/cnxk: not in enabled drivers build config 00:07:00.662 net/cpfl: not in enabled drivers build config 00:07:00.662 net/cxgbe: not in enabled drivers build config 00:07:00.662 net/dpaa: not in enabled drivers build config 00:07:00.662 net/dpaa2: not in enabled drivers build config 00:07:00.662 net/e1000: not in enabled drivers build config 00:07:00.662 net/ena: not in enabled drivers build config 00:07:00.662 net/enetc: not in enabled drivers build config 00:07:00.662 net/enetfec: not in enabled drivers build config 00:07:00.662 net/enic: not in enabled drivers build config 00:07:00.662 net/failsafe: not in enabled drivers build config 00:07:00.662 net/fm10k: not in enabled drivers build config 00:07:00.662 net/gve: not in enabled drivers build config 00:07:00.662 net/hinic: not in enabled drivers build config 00:07:00.662 net/hns3: not in enabled drivers build config 00:07:00.662 net/i40e: not in enabled drivers build config 00:07:00.662 net/iavf: not in enabled drivers build config 00:07:00.662 net/ice: not in enabled drivers build config 00:07:00.662 net/idpf: not in enabled drivers build config 00:07:00.662 net/igc: not in enabled drivers build config 00:07:00.662 net/ionic: not in enabled drivers build config 00:07:00.662 net/ipn3ke: not in enabled drivers build config 00:07:00.662 net/ixgbe: not in enabled drivers build config 00:07:00.662 net/mana: not in enabled drivers build config 00:07:00.662 net/memif: not in enabled drivers build config 00:07:00.662 net/mlx4: not in enabled drivers build config 00:07:00.662 net/mlx5: not in enabled drivers build config 00:07:00.662 net/mvneta: not in enabled drivers build config 00:07:00.662 net/mvpp2: not in enabled drivers build config 00:07:00.662 net/netvsc: not in enabled drivers build config 00:07:00.662 net/nfb: not in enabled drivers build config 00:07:00.662 net/nfp: not in enabled drivers build config 00:07:00.662 net/ngbe: not in enabled drivers build config 00:07:00.662 net/null: not in enabled drivers build config 00:07:00.662 net/octeontx: not in enabled drivers build config 00:07:00.662 net/octeon_ep: not in enabled drivers build config 00:07:00.662 net/pcap: not in enabled drivers build config 00:07:00.662 net/pfe: not in enabled drivers build config 00:07:00.662 net/qede: not in enabled drivers build config 00:07:00.662 net/ring: not in enabled drivers build config 00:07:00.662 net/sfc: not in enabled drivers build config 00:07:00.662 net/softnic: not in enabled drivers build config 00:07:00.662 net/tap: not in enabled drivers build config 00:07:00.662 net/thunderx: not in enabled drivers build 
config 00:07:00.662 net/txgbe: not in enabled drivers build config 00:07:00.662 net/vdev_netvsc: not in enabled drivers build config 00:07:00.662 net/vhost: not in enabled drivers build config 00:07:00.662 net/virtio: not in enabled drivers build config 00:07:00.662 net/vmxnet3: not in enabled drivers build config 00:07:00.662 raw/*: missing internal dependency, "rawdev" 00:07:00.662 crypto/armv8: not in enabled drivers build config 00:07:00.662 crypto/bcmfs: not in enabled drivers build config 00:07:00.662 crypto/caam_jr: not in enabled drivers build config 00:07:00.662 crypto/ccp: not in enabled drivers build config 00:07:00.662 crypto/cnxk: not in enabled drivers build config 00:07:00.662 crypto/dpaa_sec: not in enabled drivers build config 00:07:00.662 crypto/dpaa2_sec: not in enabled drivers build config 00:07:00.662 crypto/ipsec_mb: not in enabled drivers build config 00:07:00.662 crypto/mlx5: not in enabled drivers build config 00:07:00.662 crypto/mvsam: not in enabled drivers build config 00:07:00.662 crypto/nitrox: not in enabled drivers build config 00:07:00.662 crypto/null: not in enabled drivers build config 00:07:00.662 crypto/octeontx: not in enabled drivers build config 00:07:00.662 crypto/openssl: not in enabled drivers build config 00:07:00.662 crypto/scheduler: not in enabled drivers build config 00:07:00.662 crypto/uadk: not in enabled drivers build config 00:07:00.662 crypto/virtio: not in enabled drivers build config 00:07:00.662 compress/isal: not in enabled drivers build config 00:07:00.662 compress/mlx5: not in enabled drivers build config 00:07:00.662 compress/nitrox: not in enabled drivers build config 00:07:00.662 compress/octeontx: not in enabled drivers build config 00:07:00.662 compress/zlib: not in enabled drivers build config 00:07:00.662 regex/*: missing internal dependency, "regexdev" 00:07:00.662 ml/*: missing internal dependency, "mldev" 00:07:00.662 vdpa/ifc: not in enabled drivers build config 00:07:00.662 vdpa/mlx5: not in enabled drivers build config 00:07:00.662 vdpa/nfp: not in enabled drivers build config 00:07:00.662 vdpa/sfc: not in enabled drivers build config 00:07:00.662 event/*: missing internal dependency, "eventdev" 00:07:00.662 baseband/*: missing internal dependency, "bbdev" 00:07:00.662 gpu/*: missing internal dependency, "gpudev" 00:07:00.662 00:07:00.662 00:07:00.662 Build targets in project: 85 00:07:00.662 00:07:00.662 DPDK 24.03.0 00:07:00.662 00:07:00.662 User defined options 00:07:00.662 buildtype : debug 00:07:00.662 default_library : shared 00:07:00.662 libdir : lib 00:07:00.662 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:00.662 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:00.662 c_link_args : 00:07:00.662 cpu_instruction_set: native 00:07:00.662 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:00.662 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:00.662 enable_docs : false 00:07:00.662 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:00.662 enable_kmods : false 00:07:00.662 max_lcores : 128 00:07:00.662 tests : false 00:07:00.662 
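Run outside SPDK's dpdkbuild wrapper, the user-defined options summarized above would correspond roughly to the following standalone Meson configuration (a hedged reconstruction; option names follow DPDK's meson_options.txt, and the long disable_apps/disable_libs lists, passed the same way, are omitted for brevity):

    meson setup build-tmp --buildtype=debug \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        -Ddefault_library=shared -Dmax_lcores=128 -Dtests=false \
        -Denable_kmods=false -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
    ninja -C build-tmp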
00:07:00.662 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:01.229 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:01.229 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:01.229 [2/268] Linking static target lib/librte_kvargs.a 00:07:01.229 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:01.229 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:01.229 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:01.229 [6/268] Linking static target lib/librte_log.a 00:07:01.795 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.795 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:02.053 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:02.053 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:02.053 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:02.053 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:02.053 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:02.053 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:02.311 [15/268] Linking static target lib/librte_telemetry.a 00:07:02.311 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:02.311 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:02.311 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.311 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:02.311 [20/268] Linking target lib/librte_log.so.24.1 00:07:02.569 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:02.570 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:02.570 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:02.827 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:03.085 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.085 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:03.085 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:03.085 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:03.085 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:03.085 [30/268] Linking target lib/librte_telemetry.so.24.1 00:07:03.085 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:03.085 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:03.085 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:03.343 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:03.343 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:03.343 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:03.601 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:03.860 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:07:03.860 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:03.860 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:03.860 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:04.119 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:04.119 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:04.119 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:04.119 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:04.119 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:04.119 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:04.379 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:04.379 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:04.639 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:04.639 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:04.897 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:04.897 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:05.156 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:05.156 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:05.156 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:05.156 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:05.156 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:05.156 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:05.414 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:05.415 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:05.415 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:05.673 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:05.931 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:05.931 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:05.931 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:06.191 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:06.191 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:06.191 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:06.449 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:06.449 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:06.449 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:06.449 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:06.449 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:06.707 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:06.707 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:06.707 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:06.965 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:06.965 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:07.224 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:07.224 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:07.224 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:07.224 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:07.224 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:07.224 [85/268] Linking static target lib/librte_ring.a 00:07:07.481 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:07.481 [87/268] Linking static target lib/librte_eal.a 00:07:07.481 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:07.482 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:07.740 [90/268] Linking static target lib/librte_rcu.a 00:07:07.740 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:07.740 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:07.998 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.998 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:07.998 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:08.256 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.256 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:08.256 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:08.256 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:08.256 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:08.256 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:08.256 [102/268] Linking static target lib/librte_mbuf.a 00:07:08.256 [103/268] Linking static target lib/librte_mempool.a 00:07:08.515 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:08.515 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:08.773 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:08.773 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:08.773 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:08.773 [109/268] Linking static target lib/librte_meter.a 00:07:09.093 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:09.093 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:09.093 [112/268] Linking static target lib/librte_net.a 00:07:09.351 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.352 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:09.352 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.352 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:09.352 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:09.649 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.649 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.215 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:10.215 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:10.215 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:10.215 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:10.472 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:10.472 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:10.472 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:10.472 [127/268] Linking static target lib/librte_pci.a 00:07:10.472 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:10.732 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:10.733 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:10.733 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:10.733 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:10.733 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.997 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:10.997 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:10.997 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:10.997 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:10.997 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:10.997 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:10.997 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:10.997 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:10.997 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:10.997 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:10.997 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:10.997 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:11.255 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:11.513 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:11.513 [148/268] Linking static target lib/librte_ethdev.a 00:07:11.772 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:11.772 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:11.772 [151/268] Linking static target lib/librte_cmdline.a 00:07:11.772 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:11.772 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:11.772 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:11.772 [155/268] Linking static target lib/librte_timer.a 00:07:12.031 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:12.031 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:12.290 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:12.290 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:12.549 [160/268] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:07:12.549 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:12.549 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:12.549 [163/268] Linking static target lib/librte_compressdev.a 00:07:12.808 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:12.808 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:12.808 [166/268] Linking static target lib/librte_hash.a 00:07:12.808 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:13.070 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:13.070 [169/268] Linking static target lib/librte_dmadev.a 00:07:13.070 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:13.336 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:13.336 [172/268] Linking static target lib/librte_cryptodev.a 00:07:13.336 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:13.336 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:13.336 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.594 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:13.594 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.853 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:13.853 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:14.112 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:14.112 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.112 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.112 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:14.112 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:14.679 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:14.679 [186/268] Linking static target lib/librte_power.a 00:07:14.679 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:14.679 [188/268] Linking static target lib/librte_reorder.a 00:07:14.679 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:14.679 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:14.679 [191/268] Linking static target lib/librte_security.a 00:07:14.679 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:14.937 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:15.196 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:15.196 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.454 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.712 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.712 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:15.712 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.970 
[200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:15.970 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:15.970 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:16.229 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:16.229 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:16.229 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:16.490 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:16.490 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:16.490 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:16.756 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:16.756 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:16.756 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:17.014 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:17.014 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:17.014 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:17.014 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:17.014 [216/268] Linking static target drivers/librte_bus_vdev.a 00:07:17.014 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:17.014 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:17.272 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:17.272 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:17.272 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:17.272 [222/268] Linking static target drivers/librte_bus_pci.a 00:07:17.272 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.272 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:17.272 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:17.272 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:17.272 [227/268] Linking static target drivers/librte_mempool_ring.a 00:07:17.530 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.465 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:18.465 [230/268] Linking static target lib/librte_vhost.a 00:07:18.723 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.982 [232/268] Linking target lib/librte_eal.so.24.1 00:07:18.982 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:18.982 [234/268] Linking target lib/librte_pci.so.24.1 00:07:18.982 [235/268] Linking target lib/librte_dmadev.so.24.1 00:07:18.982 [236/268] Linking target lib/librte_ring.so.24.1 00:07:18.982 [237/268] Linking target lib/librte_meter.so.24.1 00:07:18.982 [238/268] Linking target lib/librte_timer.so.24.1 00:07:19.240 [239/268] Linking target 
drivers/librte_bus_vdev.so.24.1 00:07:19.240 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:19.240 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:19.240 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:19.240 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:19.240 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:19.240 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:19.240 [246/268] Linking target lib/librte_mempool.so.24.1 00:07:19.240 [247/268] Linking target lib/librte_rcu.so.24.1 00:07:19.499 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:19.499 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:19.499 [250/268] Linking target lib/librte_mbuf.so.24.1 00:07:19.499 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:19.499 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.499 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:19.757 [254/268] Linking target lib/librte_compressdev.so.24.1 00:07:19.757 [255/268] Linking target lib/librte_net.so.24.1 00:07:19.757 [256/268] Linking target lib/librte_reorder.so.24.1 00:07:19.757 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:07:19.757 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:19.757 [259/268] Linking target lib/librte_cmdline.so.24.1 00:07:19.757 [260/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.757 [261/268] Linking target lib/librte_hash.so.24.1 00:07:19.757 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:19.757 [263/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:20.016 [264/268] Linking target lib/librte_security.so.24.1 00:07:20.016 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:20.016 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:20.016 [267/268] Linking target lib/librte_power.so.24.1 00:07:20.016 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:20.016 INFO: autodetecting backend as ninja 00:07:20.016 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:52.084 CC lib/ut_mock/mock.o 00:07:52.084 CC lib/log/log.o 00:07:52.084 CC lib/log/log_deprecated.o 00:07:52.084 CC lib/log/log_flags.o 00:07:52.084 CC lib/ut/ut.o 00:07:52.084 LIB libspdk_ut_mock.a 00:07:52.084 LIB libspdk_ut.a 00:07:52.084 LIB libspdk_log.a 00:07:52.084 SO libspdk_ut_mock.so.6.0 00:07:52.084 SO libspdk_ut.so.2.0 00:07:52.084 SO libspdk_log.so.7.0 00:07:52.084 SYMLINK libspdk_ut_mock.so 00:07:52.084 SYMLINK libspdk_ut.so 00:07:52.084 SYMLINK libspdk_log.so 00:07:52.084 CC lib/util/base64.o 00:07:52.084 CC lib/util/bit_array.o 00:07:52.084 CC lib/util/cpuset.o 00:07:52.084 CC lib/ioat/ioat.o 00:07:52.084 CC lib/util/crc16.o 00:07:52.084 CC lib/util/crc32.o 00:07:52.084 CC lib/dma/dma.o 00:07:52.084 CC lib/util/crc32c.o 00:07:52.084 CXX lib/trace_parser/trace.o 00:07:52.084 CC lib/vfio_user/host/vfio_user_pci.o 00:07:52.084 CC lib/util/crc32_ieee.o 00:07:52.084 CC 
lib/vfio_user/host/vfio_user.o 00:07:52.084 CC lib/util/crc64.o 00:07:52.084 CC lib/util/dif.o 00:07:52.084 LIB libspdk_dma.a 00:07:52.084 CC lib/util/fd.o 00:07:52.084 SO libspdk_dma.so.5.0 00:07:52.084 CC lib/util/fd_group.o 00:07:52.085 SYMLINK libspdk_dma.so 00:07:52.085 CC lib/util/file.o 00:07:52.085 CC lib/util/hexlify.o 00:07:52.085 CC lib/util/iov.o 00:07:52.085 LIB libspdk_ioat.a 00:07:52.085 SO libspdk_ioat.so.7.0 00:07:52.085 LIB libspdk_vfio_user.a 00:07:52.085 CC lib/util/math.o 00:07:52.085 CC lib/util/net.o 00:07:52.085 SO libspdk_vfio_user.so.5.0 00:07:52.085 SYMLINK libspdk_ioat.so 00:07:52.085 CC lib/util/pipe.o 00:07:52.085 SYMLINK libspdk_vfio_user.so 00:07:52.085 CC lib/util/strerror_tls.o 00:07:52.085 CC lib/util/string.o 00:07:52.085 CC lib/util/uuid.o 00:07:52.085 CC lib/util/xor.o 00:07:52.085 CC lib/util/zipf.o 00:07:52.085 CC lib/util/md5.o 00:07:52.085 LIB libspdk_util.a 00:07:52.085 SO libspdk_util.so.10.0 00:07:52.085 LIB libspdk_trace_parser.a 00:07:52.085 SYMLINK libspdk_util.so 00:07:52.085 SO libspdk_trace_parser.so.6.0 00:07:52.085 SYMLINK libspdk_trace_parser.so 00:07:52.085 CC lib/json/json_parse.o 00:07:52.085 CC lib/json/json_write.o 00:07:52.085 CC lib/json/json_util.o 00:07:52.085 CC lib/idxd/idxd.o 00:07:52.085 CC lib/idxd/idxd_user.o 00:07:52.085 CC lib/rdma_provider/common.o 00:07:52.085 CC lib/env_dpdk/env.o 00:07:52.085 CC lib/conf/conf.o 00:07:52.085 CC lib/vmd/vmd.o 00:07:52.085 CC lib/rdma_utils/rdma_utils.o 00:07:52.085 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:52.085 CC lib/env_dpdk/memory.o 00:07:52.085 CC lib/env_dpdk/pci.o 00:07:52.085 LIB libspdk_conf.a 00:07:52.085 CC lib/vmd/led.o 00:07:52.085 SO libspdk_conf.so.6.0 00:07:52.085 LIB libspdk_json.a 00:07:52.085 LIB libspdk_rdma_utils.a 00:07:52.085 SYMLINK libspdk_conf.so 00:07:52.085 SO libspdk_json.so.6.0 00:07:52.085 SO libspdk_rdma_utils.so.1.0 00:07:52.085 CC lib/idxd/idxd_kernel.o 00:07:52.085 LIB libspdk_rdma_provider.a 00:07:52.085 SYMLINK libspdk_rdma_utils.so 00:07:52.085 CC lib/env_dpdk/init.o 00:07:52.085 SYMLINK libspdk_json.so 00:07:52.085 CC lib/env_dpdk/threads.o 00:07:52.085 SO libspdk_rdma_provider.so.6.0 00:07:52.085 CC lib/env_dpdk/pci_ioat.o 00:07:52.085 SYMLINK libspdk_rdma_provider.so 00:07:52.085 CC lib/env_dpdk/pci_virtio.o 00:07:52.085 CC lib/env_dpdk/pci_vmd.o 00:07:52.085 CC lib/env_dpdk/pci_idxd.o 00:07:52.085 CC lib/env_dpdk/pci_event.o 00:07:52.085 LIB libspdk_idxd.a 00:07:52.085 CC lib/jsonrpc/jsonrpc_server.o 00:07:52.085 SO libspdk_idxd.so.12.1 00:07:52.085 CC lib/env_dpdk/sigbus_handler.o 00:07:52.085 LIB libspdk_vmd.a 00:07:52.085 CC lib/env_dpdk/pci_dpdk.o 00:07:52.085 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:52.085 SO libspdk_vmd.so.6.0 00:07:52.085 SYMLINK libspdk_idxd.so 00:07:52.085 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:52.085 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:52.085 CC lib/jsonrpc/jsonrpc_client.o 00:07:52.085 SYMLINK libspdk_vmd.so 00:07:52.085 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:52.085 LIB libspdk_jsonrpc.a 00:07:52.085 SO libspdk_jsonrpc.so.6.0 00:07:52.085 SYMLINK libspdk_jsonrpc.so 00:07:52.085 CC lib/rpc/rpc.o 00:07:52.085 LIB libspdk_env_dpdk.a 00:07:52.085 SO libspdk_env_dpdk.so.15.0 00:07:52.085 LIB libspdk_rpc.a 00:07:52.085 SO libspdk_rpc.so.6.0 00:07:52.085 SYMLINK libspdk_rpc.so 00:07:52.085 SYMLINK libspdk_env_dpdk.so 00:07:52.085 CC lib/keyring/keyring.o 00:07:52.085 CC lib/keyring/keyring_rpc.o 00:07:52.085 CC lib/notify/notify.o 00:07:52.085 CC lib/notify/notify_rpc.o 00:07:52.085 CC lib/trace/trace_flags.o 
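The CC / LIB / SO / SYMLINK prefixes in this part of the log are SPDK's abbreviated make output: compile an object, archive it into a static library, link the versioned shared object, then symlink the unversioned name. A hedged sketch of reproducing this phase locally follows; the configure flags are inferred from the autorun configuration (uring, USDT, UBSan) and are assumptions, not commands captured in this log.

  # Hedged sketch: a local SPDK build that yields the same CC/LIB/SO/SYMLINK pattern.
  # Configure flags are assumptions inferred from the test config, not read from this log.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-ubsan --with-uring --with-usdt
  make -j"$(nproc)"
  ls build/lib/libspdk_util.so*    # versioned shared object plus the SYMLINKed unversioned name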
00:07:52.085 CC lib/trace/trace.o 00:07:52.085 CC lib/trace/trace_rpc.o 00:07:52.085 LIB libspdk_notify.a 00:07:52.085 SO libspdk_notify.so.6.0 00:07:52.085 LIB libspdk_trace.a 00:07:52.085 LIB libspdk_keyring.a 00:07:52.085 SYMLINK libspdk_notify.so 00:07:52.085 SO libspdk_trace.so.11.0 00:07:52.085 SO libspdk_keyring.so.2.0 00:07:52.085 SYMLINK libspdk_trace.so 00:07:52.085 SYMLINK libspdk_keyring.so 00:07:52.085 CC lib/sock/sock.o 00:07:52.085 CC lib/sock/sock_rpc.o 00:07:52.085 CC lib/thread/iobuf.o 00:07:52.085 CC lib/thread/thread.o 00:07:52.085 LIB libspdk_sock.a 00:07:52.085 SO libspdk_sock.so.10.0 00:07:52.085 SYMLINK libspdk_sock.so 00:07:52.344 CC lib/nvme/nvme_ctrlr.o 00:07:52.344 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:52.344 CC lib/nvme/nvme_fabric.o 00:07:52.344 CC lib/nvme/nvme_pcie.o 00:07:52.344 CC lib/nvme/nvme_ns.o 00:07:52.344 CC lib/nvme/nvme_ns_cmd.o 00:07:52.344 CC lib/nvme/nvme.o 00:07:52.344 CC lib/nvme/nvme_qpair.o 00:07:52.344 CC lib/nvme/nvme_pcie_common.o 00:07:53.279 CC lib/nvme/nvme_quirks.o 00:07:53.279 CC lib/nvme/nvme_transport.o 00:07:53.279 LIB libspdk_thread.a 00:07:53.279 CC lib/nvme/nvme_discovery.o 00:07:53.279 SO libspdk_thread.so.10.2 00:07:53.279 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:53.279 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:53.279 SYMLINK libspdk_thread.so 00:07:53.279 CC lib/nvme/nvme_tcp.o 00:07:53.537 CC lib/nvme/nvme_opal.o 00:07:53.537 CC lib/nvme/nvme_io_msg.o 00:07:53.795 CC lib/nvme/nvme_poll_group.o 00:07:53.795 CC lib/nvme/nvme_zns.o 00:07:54.053 CC lib/nvme/nvme_stubs.o 00:07:54.053 CC lib/nvme/nvme_auth.o 00:07:54.053 CC lib/nvme/nvme_cuse.o 00:07:54.053 CC lib/nvme/nvme_rdma.o 00:07:54.311 CC lib/blob/blobstore.o 00:07:54.311 CC lib/accel/accel.o 00:07:54.570 CC lib/init/json_config.o 00:07:54.828 CC lib/fsdev/fsdev.o 00:07:54.828 CC lib/virtio/virtio.o 00:07:54.828 CC lib/virtio/virtio_vhost_user.o 00:07:54.828 CC lib/init/subsystem.o 00:07:54.828 CC lib/init/subsystem_rpc.o 00:07:55.086 CC lib/init/rpc.o 00:07:55.086 CC lib/virtio/virtio_vfio_user.o 00:07:55.086 CC lib/fsdev/fsdev_io.o 00:07:55.086 CC lib/fsdev/fsdev_rpc.o 00:07:55.086 CC lib/accel/accel_rpc.o 00:07:55.086 CC lib/virtio/virtio_pci.o 00:07:55.086 LIB libspdk_init.a 00:07:55.086 SO libspdk_init.so.6.0 00:07:55.344 CC lib/accel/accel_sw.o 00:07:55.344 CC lib/blob/request.o 00:07:55.344 SYMLINK libspdk_init.so 00:07:55.344 CC lib/blob/zeroes.o 00:07:55.344 CC lib/blob/blob_bs_dev.o 00:07:55.344 LIB libspdk_fsdev.a 00:07:55.344 LIB libspdk_virtio.a 00:07:55.344 LIB libspdk_nvme.a 00:07:55.344 SO libspdk_fsdev.so.1.0 00:07:55.344 SO libspdk_virtio.so.7.0 00:07:55.603 CC lib/event/app.o 00:07:55.603 CC lib/event/reactor.o 00:07:55.603 CC lib/event/log_rpc.o 00:07:55.603 SYMLINK libspdk_fsdev.so 00:07:55.603 CC lib/event/app_rpc.o 00:07:55.603 SYMLINK libspdk_virtio.so 00:07:55.603 LIB libspdk_accel.a 00:07:55.603 CC lib/event/scheduler_static.o 00:07:55.603 SO libspdk_accel.so.16.0 00:07:55.603 SO libspdk_nvme.so.14.0 00:07:55.603 SYMLINK libspdk_accel.so 00:07:55.603 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:55.861 SYMLINK libspdk_nvme.so 00:07:55.861 CC lib/bdev/bdev_rpc.o 00:07:55.861 CC lib/bdev/bdev.o 00:07:55.861 CC lib/bdev/bdev_zone.o 00:07:55.861 CC lib/bdev/scsi_nvme.o 00:07:55.861 CC lib/bdev/part.o 00:07:55.861 LIB libspdk_event.a 00:07:56.119 SO libspdk_event.so.15.0 00:07:56.119 SYMLINK libspdk_event.so 00:07:56.377 LIB libspdk_fuse_dispatcher.a 00:07:56.377 SO libspdk_fuse_dispatcher.so.1.0 00:07:56.377 SYMLINK libspdk_fuse_dispatcher.so 00:07:57.753 
LIB libspdk_blob.a 00:07:57.753 SO libspdk_blob.so.11.0 00:07:57.753 SYMLINK libspdk_blob.so 00:07:57.753 CC lib/blobfs/blobfs.o 00:07:57.753 CC lib/lvol/lvol.o 00:07:57.753 CC lib/blobfs/tree.o 00:07:58.687 LIB libspdk_blobfs.a 00:07:58.687 SO libspdk_blobfs.so.10.0 00:07:58.945 SYMLINK libspdk_blobfs.so 00:07:58.945 LIB libspdk_bdev.a 00:07:58.945 LIB libspdk_lvol.a 00:07:58.945 SO libspdk_bdev.so.17.0 00:07:58.945 SO libspdk_lvol.so.10.0 00:07:59.203 SYMLINK libspdk_lvol.so 00:07:59.203 SYMLINK libspdk_bdev.so 00:07:59.203 CC lib/scsi/dev.o 00:07:59.203 CC lib/scsi/lun.o 00:07:59.203 CC lib/ftl/ftl_layout.o 00:07:59.203 CC lib/nbd/nbd.o 00:07:59.203 CC lib/scsi/port.o 00:07:59.203 CC lib/ftl/ftl_core.o 00:07:59.203 CC lib/ftl/ftl_init.o 00:07:59.203 CC lib/scsi/scsi.o 00:07:59.203 CC lib/ublk/ublk.o 00:07:59.203 CC lib/nvmf/ctrlr.o 00:07:59.462 CC lib/nvmf/ctrlr_discovery.o 00:07:59.462 CC lib/ftl/ftl_debug.o 00:07:59.462 CC lib/ftl/ftl_io.o 00:07:59.462 CC lib/scsi/scsi_bdev.o 00:07:59.720 CC lib/nbd/nbd_rpc.o 00:07:59.720 CC lib/ftl/ftl_sb.o 00:07:59.720 CC lib/ublk/ublk_rpc.o 00:07:59.720 CC lib/scsi/scsi_pr.o 00:07:59.720 CC lib/scsi/scsi_rpc.o 00:07:59.720 CC lib/ftl/ftl_l2p.o 00:07:59.720 LIB libspdk_nbd.a 00:07:59.720 SO libspdk_nbd.so.7.0 00:07:59.979 CC lib/scsi/task.o 00:07:59.979 SYMLINK libspdk_nbd.so 00:07:59.979 CC lib/ftl/ftl_l2p_flat.o 00:07:59.979 CC lib/ftl/ftl_nv_cache.o 00:07:59.979 CC lib/ftl/ftl_band.o 00:07:59.979 CC lib/nvmf/ctrlr_bdev.o 00:07:59.979 LIB libspdk_ublk.a 00:07:59.979 CC lib/ftl/ftl_band_ops.o 00:07:59.979 SO libspdk_ublk.so.3.0 00:07:59.979 CC lib/ftl/ftl_writer.o 00:08:00.238 SYMLINK libspdk_ublk.so 00:08:00.238 CC lib/ftl/ftl_rq.o 00:08:00.238 CC lib/nvmf/subsystem.o 00:08:00.238 LIB libspdk_scsi.a 00:08:00.238 CC lib/ftl/ftl_reloc.o 00:08:00.238 SO libspdk_scsi.so.9.0 00:08:00.238 SYMLINK libspdk_scsi.so 00:08:00.238 CC lib/nvmf/nvmf.o 00:08:00.238 CC lib/nvmf/nvmf_rpc.o 00:08:00.238 CC lib/nvmf/transport.o 00:08:00.238 CC lib/nvmf/tcp.o 00:08:00.497 CC lib/ftl/ftl_l2p_cache.o 00:08:00.755 CC lib/iscsi/conn.o 00:08:00.755 CC lib/iscsi/init_grp.o 00:08:01.013 CC lib/iscsi/iscsi.o 00:08:01.013 CC lib/ftl/ftl_p2l.o 00:08:01.013 CC lib/iscsi/param.o 00:08:01.013 CC lib/nvmf/stubs.o 00:08:01.272 CC lib/nvmf/mdns_server.o 00:08:01.272 CC lib/iscsi/portal_grp.o 00:08:01.272 CC lib/iscsi/tgt_node.o 00:08:01.272 CC lib/nvmf/rdma.o 00:08:01.530 CC lib/ftl/ftl_p2l_log.o 00:08:01.530 CC lib/nvmf/auth.o 00:08:01.530 CC lib/vhost/vhost.o 00:08:01.530 CC lib/vhost/vhost_rpc.o 00:08:01.530 CC lib/iscsi/iscsi_subsystem.o 00:08:01.787 CC lib/iscsi/iscsi_rpc.o 00:08:01.787 CC lib/iscsi/task.o 00:08:01.787 CC lib/ftl/mngt/ftl_mngt.o 00:08:02.045 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:02.045 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:02.045 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:02.045 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:02.304 CC lib/vhost/vhost_scsi.o 00:08:02.304 CC lib/vhost/vhost_blk.o 00:08:02.304 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:02.304 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:02.304 CC lib/vhost/rte_vhost_user.o 00:08:02.304 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:02.304 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:02.562 LIB libspdk_iscsi.a 00:08:02.562 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:02.562 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:02.562 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:02.562 SO libspdk_iscsi.so.8.0 00:08:02.562 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:02.562 CC lib/ftl/utils/ftl_conf.o 00:08:02.820 SYMLINK libspdk_iscsi.so 00:08:02.820 CC 
lib/ftl/utils/ftl_md.o 00:08:02.820 CC lib/ftl/utils/ftl_mempool.o 00:08:02.820 CC lib/ftl/utils/ftl_bitmap.o 00:08:02.820 CC lib/ftl/utils/ftl_property.o 00:08:02.820 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:02.820 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:02.820 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:03.079 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:03.079 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:03.079 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:03.079 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:03.079 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:03.079 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:03.079 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:03.079 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:03.079 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:03.337 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:03.337 CC lib/ftl/base/ftl_base_dev.o 00:08:03.337 CC lib/ftl/base/ftl_base_bdev.o 00:08:03.337 CC lib/ftl/ftl_trace.o 00:08:03.337 LIB libspdk_vhost.a 00:08:03.337 SO libspdk_vhost.so.8.0 00:08:03.596 LIB libspdk_nvmf.a 00:08:03.596 SYMLINK libspdk_vhost.so 00:08:03.596 LIB libspdk_ftl.a 00:08:03.596 SO libspdk_nvmf.so.19.0 00:08:03.854 SYMLINK libspdk_nvmf.so 00:08:03.854 SO libspdk_ftl.so.9.0 00:08:04.112 SYMLINK libspdk_ftl.so 00:08:04.679 CC module/env_dpdk/env_dpdk_rpc.o 00:08:04.679 CC module/scheduler/gscheduler/gscheduler.o 00:08:04.679 CC module/blob/bdev/blob_bdev.o 00:08:04.679 CC module/accel/error/accel_error.o 00:08:04.679 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:04.679 CC module/accel/ioat/accel_ioat.o 00:08:04.679 CC module/fsdev/aio/fsdev_aio.o 00:08:04.679 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:04.679 CC module/sock/posix/posix.o 00:08:04.679 CC module/keyring/file/keyring.o 00:08:04.679 LIB libspdk_env_dpdk_rpc.a 00:08:04.679 SO libspdk_env_dpdk_rpc.so.6.0 00:08:04.679 LIB libspdk_scheduler_gscheduler.a 00:08:04.679 SYMLINK libspdk_env_dpdk_rpc.so 00:08:04.679 CC module/keyring/file/keyring_rpc.o 00:08:04.937 SO libspdk_scheduler_gscheduler.so.4.0 00:08:04.937 CC module/accel/ioat/accel_ioat_rpc.o 00:08:04.937 CC module/accel/error/accel_error_rpc.o 00:08:04.937 LIB libspdk_scheduler_dpdk_governor.a 00:08:04.937 LIB libspdk_scheduler_dynamic.a 00:08:04.937 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:04.937 SO libspdk_scheduler_dynamic.so.4.0 00:08:04.937 SYMLINK libspdk_scheduler_gscheduler.so 00:08:04.937 SYMLINK libspdk_scheduler_dynamic.so 00:08:04.937 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:04.937 LIB libspdk_blob_bdev.a 00:08:04.937 LIB libspdk_keyring_file.a 00:08:04.937 SO libspdk_blob_bdev.so.11.0 00:08:04.937 LIB libspdk_accel_ioat.a 00:08:04.937 CC module/accel/dsa/accel_dsa.o 00:08:04.937 SO libspdk_keyring_file.so.2.0 00:08:04.937 LIB libspdk_accel_error.a 00:08:04.937 SO libspdk_accel_ioat.so.6.0 00:08:04.937 SO libspdk_accel_error.so.2.0 00:08:04.937 SYMLINK libspdk_blob_bdev.so 00:08:04.937 SYMLINK libspdk_keyring_file.so 00:08:04.937 CC module/accel/dsa/accel_dsa_rpc.o 00:08:04.937 CC module/keyring/linux/keyring.o 00:08:04.937 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:05.196 SYMLINK libspdk_accel_ioat.so 00:08:05.196 SYMLINK libspdk_accel_error.so 00:08:05.196 CC module/fsdev/aio/linux_aio_mgr.o 00:08:05.196 CC module/accel/iaa/accel_iaa.o 00:08:05.196 CC module/sock/uring/uring.o 00:08:05.196 CC module/accel/iaa/accel_iaa_rpc.o 00:08:05.196 CC module/keyring/linux/keyring_rpc.o 00:08:05.196 LIB libspdk_accel_dsa.a 00:08:05.196 LIB libspdk_fsdev_aio.a 00:08:05.196 SO libspdk_accel_dsa.so.5.0 00:08:05.453 SO 
libspdk_fsdev_aio.so.1.0 00:08:05.453 CC module/bdev/delay/vbdev_delay.o 00:08:05.453 LIB libspdk_accel_iaa.a 00:08:05.453 LIB libspdk_keyring_linux.a 00:08:05.453 SYMLINK libspdk_accel_dsa.so 00:08:05.453 LIB libspdk_sock_posix.a 00:08:05.453 SO libspdk_accel_iaa.so.3.0 00:08:05.453 CC module/bdev/error/vbdev_error.o 00:08:05.453 SYMLINK libspdk_fsdev_aio.so 00:08:05.453 SO libspdk_keyring_linux.so.1.0 00:08:05.453 SO libspdk_sock_posix.so.6.0 00:08:05.453 SYMLINK libspdk_keyring_linux.so 00:08:05.453 SYMLINK libspdk_accel_iaa.so 00:08:05.453 CC module/bdev/error/vbdev_error_rpc.o 00:08:05.453 CC module/bdev/gpt/gpt.o 00:08:05.453 SYMLINK libspdk_sock_posix.so 00:08:05.453 CC module/blobfs/bdev/blobfs_bdev.o 00:08:05.453 CC module/bdev/lvol/vbdev_lvol.o 00:08:05.711 CC module/bdev/malloc/bdev_malloc.o 00:08:05.711 CC module/bdev/null/bdev_null.o 00:08:05.711 CC module/bdev/nvme/bdev_nvme.o 00:08:05.711 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:05.711 LIB libspdk_bdev_error.a 00:08:05.711 CC module/bdev/gpt/vbdev_gpt.o 00:08:05.711 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:05.711 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:05.711 SO libspdk_bdev_error.so.6.0 00:08:05.711 SYMLINK libspdk_bdev_error.so 00:08:05.711 CC module/bdev/nvme/nvme_rpc.o 00:08:05.711 LIB libspdk_sock_uring.a 00:08:05.969 SO libspdk_sock_uring.so.5.0 00:08:05.969 SYMLINK libspdk_sock_uring.so 00:08:05.969 LIB libspdk_blobfs_bdev.a 00:08:05.969 CC module/bdev/nvme/bdev_mdns_client.o 00:08:05.969 LIB libspdk_bdev_delay.a 00:08:05.969 CC module/bdev/null/bdev_null_rpc.o 00:08:05.969 SO libspdk_blobfs_bdev.so.6.0 00:08:05.969 SO libspdk_bdev_delay.so.6.0 00:08:05.969 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:05.969 LIB libspdk_bdev_gpt.a 00:08:05.969 SYMLINK libspdk_blobfs_bdev.so 00:08:05.969 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:05.969 SYMLINK libspdk_bdev_delay.so 00:08:05.969 SO libspdk_bdev_gpt.so.6.0 00:08:05.969 CC module/bdev/nvme/vbdev_opal.o 00:08:06.226 SYMLINK libspdk_bdev_gpt.so 00:08:06.226 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:06.226 LIB libspdk_bdev_null.a 00:08:06.226 SO libspdk_bdev_null.so.6.0 00:08:06.226 LIB libspdk_bdev_malloc.a 00:08:06.226 CC module/bdev/passthru/vbdev_passthru.o 00:08:06.226 SO libspdk_bdev_malloc.so.6.0 00:08:06.226 SYMLINK libspdk_bdev_null.so 00:08:06.226 CC module/bdev/raid/bdev_raid.o 00:08:06.226 CC module/bdev/split/vbdev_split.o 00:08:06.226 SYMLINK libspdk_bdev_malloc.so 00:08:06.226 CC module/bdev/split/vbdev_split_rpc.o 00:08:06.226 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:06.226 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:06.484 CC module/bdev/raid/bdev_raid_rpc.o 00:08:06.484 LIB libspdk_bdev_lvol.a 00:08:06.484 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:06.484 SO libspdk_bdev_lvol.so.6.0 00:08:06.484 SYMLINK libspdk_bdev_lvol.so 00:08:06.484 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:06.484 CC module/bdev/raid/bdev_raid_sb.o 00:08:06.484 LIB libspdk_bdev_split.a 00:08:06.484 LIB libspdk_bdev_passthru.a 00:08:06.484 SO libspdk_bdev_split.so.6.0 00:08:06.741 SO libspdk_bdev_passthru.so.6.0 00:08:06.741 SYMLINK libspdk_bdev_split.so 00:08:06.741 CC module/bdev/raid/raid0.o 00:08:06.741 SYMLINK libspdk_bdev_passthru.so 00:08:06.741 CC module/bdev/raid/raid1.o 00:08:06.741 CC module/bdev/uring/bdev_uring.o 00:08:06.741 CC module/bdev/aio/bdev_aio.o 00:08:06.741 LIB libspdk_bdev_zone_block.a 00:08:06.741 SO libspdk_bdev_zone_block.so.6.0 00:08:06.741 CC module/bdev/aio/bdev_aio_rpc.o 00:08:06.741 CC 
module/bdev/ftl/bdev_ftl.o 00:08:06.741 CC module/bdev/iscsi/bdev_iscsi.o 00:08:06.741 SYMLINK libspdk_bdev_zone_block.so 00:08:06.998 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:06.998 CC module/bdev/raid/concat.o 00:08:06.998 CC module/bdev/uring/bdev_uring_rpc.o 00:08:06.998 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:06.998 LIB libspdk_bdev_aio.a 00:08:07.256 SO libspdk_bdev_aio.so.6.0 00:08:07.256 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:07.256 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:07.256 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:07.256 LIB libspdk_bdev_uring.a 00:08:07.256 SYMLINK libspdk_bdev_aio.so 00:08:07.256 LIB libspdk_bdev_iscsi.a 00:08:07.256 SO libspdk_bdev_uring.so.6.0 00:08:07.256 SO libspdk_bdev_iscsi.so.6.0 00:08:07.256 LIB libspdk_bdev_ftl.a 00:08:07.256 SYMLINK libspdk_bdev_uring.so 00:08:07.256 SO libspdk_bdev_ftl.so.6.0 00:08:07.256 SYMLINK libspdk_bdev_iscsi.so 00:08:07.256 LIB libspdk_bdev_raid.a 00:08:07.514 SYMLINK libspdk_bdev_ftl.so 00:08:07.514 SO libspdk_bdev_raid.so.6.0 00:08:07.514 SYMLINK libspdk_bdev_raid.so 00:08:07.773 LIB libspdk_bdev_virtio.a 00:08:07.773 SO libspdk_bdev_virtio.so.6.0 00:08:07.773 SYMLINK libspdk_bdev_virtio.so 00:08:08.031 LIB libspdk_bdev_nvme.a 00:08:08.031 SO libspdk_bdev_nvme.so.7.0 00:08:08.288 SYMLINK libspdk_bdev_nvme.so 00:08:08.854 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:08.854 CC module/event/subsystems/scheduler/scheduler.o 00:08:08.854 CC module/event/subsystems/keyring/keyring.o 00:08:08.854 CC module/event/subsystems/fsdev/fsdev.o 00:08:08.854 CC module/event/subsystems/sock/sock.o 00:08:08.854 CC module/event/subsystems/iobuf/iobuf.o 00:08:08.854 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:08.854 CC module/event/subsystems/vmd/vmd.o 00:08:08.854 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:08.854 LIB libspdk_event_fsdev.a 00:08:08.854 LIB libspdk_event_scheduler.a 00:08:08.854 LIB libspdk_event_keyring.a 00:08:08.854 LIB libspdk_event_vhost_blk.a 00:08:08.854 SO libspdk_event_fsdev.so.1.0 00:08:09.112 SO libspdk_event_scheduler.so.4.0 00:08:09.112 LIB libspdk_event_sock.a 00:08:09.112 LIB libspdk_event_vmd.a 00:08:09.112 SO libspdk_event_keyring.so.1.0 00:08:09.112 LIB libspdk_event_iobuf.a 00:08:09.112 SO libspdk_event_vhost_blk.so.3.0 00:08:09.112 SO libspdk_event_sock.so.5.0 00:08:09.112 SO libspdk_event_vmd.so.6.0 00:08:09.112 SYMLINK libspdk_event_fsdev.so 00:08:09.112 SYMLINK libspdk_event_scheduler.so 00:08:09.112 SO libspdk_event_iobuf.so.3.0 00:08:09.112 SYMLINK libspdk_event_keyring.so 00:08:09.112 SYMLINK libspdk_event_sock.so 00:08:09.112 SYMLINK libspdk_event_vhost_blk.so 00:08:09.112 SYMLINK libspdk_event_vmd.so 00:08:09.112 SYMLINK libspdk_event_iobuf.so 00:08:09.370 CC module/event/subsystems/accel/accel.o 00:08:09.628 LIB libspdk_event_accel.a 00:08:09.628 SO libspdk_event_accel.so.6.0 00:08:09.628 SYMLINK libspdk_event_accel.so 00:08:09.886 CC module/event/subsystems/bdev/bdev.o 00:08:10.144 LIB libspdk_event_bdev.a 00:08:10.144 SO libspdk_event_bdev.so.6.0 00:08:10.144 SYMLINK libspdk_event_bdev.so 00:08:10.402 CC module/event/subsystems/scsi/scsi.o 00:08:10.402 CC module/event/subsystems/nbd/nbd.o 00:08:10.402 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:10.402 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:10.402 CC module/event/subsystems/ublk/ublk.o 00:08:10.661 LIB libspdk_event_nbd.a 00:08:10.661 LIB libspdk_event_ublk.a 00:08:10.661 LIB libspdk_event_scsi.a 00:08:10.661 SO libspdk_event_ublk.so.3.0 00:08:10.661 SO libspdk_event_nbd.so.6.0 
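Each SO line names a versioned shared object (for example libspdk_event_bdev.so.6.0) that the following SYMLINK step exposes under its unversioned name. If one of these link steps looks wrong, the produced object can be checked directly; the path below assumes SPDK's default build/lib output directory.

  # Hedged sketch: sanity-check a shared object named in the SO/SYMLINK lines above.
  # The build/lib path is an assumption about the default output location.
  readelf -d build/lib/libspdk_event_bdev.so.6.0 | grep -E 'SONAME|NEEDED'
  nm -D --defined-only build/lib/libspdk_event_bdev.so.6.0 | head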
00:08:10.661 SO libspdk_event_scsi.so.6.0 00:08:10.661 SYMLINK libspdk_event_nbd.so 00:08:10.661 SYMLINK libspdk_event_ublk.so 00:08:10.661 SYMLINK libspdk_event_scsi.so 00:08:10.920 LIB libspdk_event_nvmf.a 00:08:10.920 SO libspdk_event_nvmf.so.6.0 00:08:10.920 SYMLINK libspdk_event_nvmf.so 00:08:10.920 CC module/event/subsystems/iscsi/iscsi.o 00:08:10.920 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:11.178 LIB libspdk_event_iscsi.a 00:08:11.178 LIB libspdk_event_vhost_scsi.a 00:08:11.178 SO libspdk_event_iscsi.so.6.0 00:08:11.178 SO libspdk_event_vhost_scsi.so.3.0 00:08:11.435 SYMLINK libspdk_event_iscsi.so 00:08:11.435 SYMLINK libspdk_event_vhost_scsi.so 00:08:11.435 SO libspdk.so.6.0 00:08:11.435 SYMLINK libspdk.so 00:08:11.693 CC app/trace_record/trace_record.o 00:08:11.693 CXX app/trace/trace.o 00:08:11.693 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:11.693 CC app/iscsi_tgt/iscsi_tgt.o 00:08:11.693 CC app/nvmf_tgt/nvmf_main.o 00:08:11.951 CC examples/ioat/perf/perf.o 00:08:11.951 CC test/thread/poller_perf/poller_perf.o 00:08:11.951 CC examples/util/zipf/zipf.o 00:08:11.951 CC test/dma/test_dma/test_dma.o 00:08:11.951 CC test/app/bdev_svc/bdev_svc.o 00:08:11.951 LINK spdk_trace_record 00:08:11.951 LINK interrupt_tgt 00:08:11.951 LINK nvmf_tgt 00:08:11.951 LINK iscsi_tgt 00:08:11.951 LINK poller_perf 00:08:12.210 LINK ioat_perf 00:08:12.210 LINK zipf 00:08:12.210 LINK bdev_svc 00:08:12.210 LINK spdk_trace 00:08:12.210 TEST_HEADER include/spdk/accel.h 00:08:12.210 TEST_HEADER include/spdk/accel_module.h 00:08:12.210 CC app/spdk_lspci/spdk_lspci.o 00:08:12.210 TEST_HEADER include/spdk/assert.h 00:08:12.210 TEST_HEADER include/spdk/barrier.h 00:08:12.210 TEST_HEADER include/spdk/base64.h 00:08:12.210 TEST_HEADER include/spdk/bdev.h 00:08:12.210 TEST_HEADER include/spdk/bdev_module.h 00:08:12.210 TEST_HEADER include/spdk/bdev_zone.h 00:08:12.210 TEST_HEADER include/spdk/bit_array.h 00:08:12.210 TEST_HEADER include/spdk/bit_pool.h 00:08:12.210 CC app/spdk_tgt/spdk_tgt.o 00:08:12.470 TEST_HEADER include/spdk/blob_bdev.h 00:08:12.470 CC examples/ioat/verify/verify.o 00:08:12.470 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:12.470 TEST_HEADER include/spdk/blobfs.h 00:08:12.470 TEST_HEADER include/spdk/blob.h 00:08:12.470 TEST_HEADER include/spdk/conf.h 00:08:12.470 TEST_HEADER include/spdk/config.h 00:08:12.470 TEST_HEADER include/spdk/cpuset.h 00:08:12.470 TEST_HEADER include/spdk/crc16.h 00:08:12.470 TEST_HEADER include/spdk/crc32.h 00:08:12.470 TEST_HEADER include/spdk/crc64.h 00:08:12.470 TEST_HEADER include/spdk/dif.h 00:08:12.470 TEST_HEADER include/spdk/dma.h 00:08:12.470 TEST_HEADER include/spdk/endian.h 00:08:12.470 TEST_HEADER include/spdk/env_dpdk.h 00:08:12.470 TEST_HEADER include/spdk/env.h 00:08:12.470 TEST_HEADER include/spdk/event.h 00:08:12.470 TEST_HEADER include/spdk/fd_group.h 00:08:12.470 TEST_HEADER include/spdk/fd.h 00:08:12.470 TEST_HEADER include/spdk/file.h 00:08:12.470 TEST_HEADER include/spdk/fsdev.h 00:08:12.470 TEST_HEADER include/spdk/fsdev_module.h 00:08:12.470 TEST_HEADER include/spdk/ftl.h 00:08:12.470 CC app/spdk_nvme_perf/perf.o 00:08:12.470 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:12.470 TEST_HEADER include/spdk/gpt_spec.h 00:08:12.470 TEST_HEADER include/spdk/hexlify.h 00:08:12.470 TEST_HEADER include/spdk/histogram_data.h 00:08:12.470 TEST_HEADER include/spdk/idxd.h 00:08:12.470 TEST_HEADER include/spdk/idxd_spec.h 00:08:12.470 TEST_HEADER include/spdk/init.h 00:08:12.470 TEST_HEADER include/spdk/ioat.h 00:08:12.470 TEST_HEADER 
include/spdk/ioat_spec.h 00:08:12.470 TEST_HEADER include/spdk/iscsi_spec.h 00:08:12.470 TEST_HEADER include/spdk/json.h 00:08:12.470 LINK test_dma 00:08:12.470 TEST_HEADER include/spdk/jsonrpc.h 00:08:12.470 TEST_HEADER include/spdk/keyring.h 00:08:12.470 TEST_HEADER include/spdk/keyring_module.h 00:08:12.470 TEST_HEADER include/spdk/likely.h 00:08:12.470 TEST_HEADER include/spdk/log.h 00:08:12.470 TEST_HEADER include/spdk/lvol.h 00:08:12.470 TEST_HEADER include/spdk/md5.h 00:08:12.470 CC app/spdk_nvme_identify/identify.o 00:08:12.470 TEST_HEADER include/spdk/memory.h 00:08:12.470 TEST_HEADER include/spdk/mmio.h 00:08:12.470 TEST_HEADER include/spdk/nbd.h 00:08:12.470 TEST_HEADER include/spdk/net.h 00:08:12.470 TEST_HEADER include/spdk/notify.h 00:08:12.470 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:12.470 TEST_HEADER include/spdk/nvme.h 00:08:12.470 TEST_HEADER include/spdk/nvme_intel.h 00:08:12.470 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:12.470 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:12.470 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:12.470 TEST_HEADER include/spdk/nvme_spec.h 00:08:12.470 TEST_HEADER include/spdk/nvme_zns.h 00:08:12.470 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:12.470 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:12.470 LINK spdk_lspci 00:08:12.470 TEST_HEADER include/spdk/nvmf.h 00:08:12.470 TEST_HEADER include/spdk/nvmf_spec.h 00:08:12.470 TEST_HEADER include/spdk/nvmf_transport.h 00:08:12.470 TEST_HEADER include/spdk/opal.h 00:08:12.470 TEST_HEADER include/spdk/opal_spec.h 00:08:12.470 TEST_HEADER include/spdk/pci_ids.h 00:08:12.470 TEST_HEADER include/spdk/pipe.h 00:08:12.470 TEST_HEADER include/spdk/queue.h 00:08:12.470 TEST_HEADER include/spdk/reduce.h 00:08:12.470 TEST_HEADER include/spdk/rpc.h 00:08:12.470 TEST_HEADER include/spdk/scheduler.h 00:08:12.470 TEST_HEADER include/spdk/scsi.h 00:08:12.470 TEST_HEADER include/spdk/scsi_spec.h 00:08:12.470 TEST_HEADER include/spdk/sock.h 00:08:12.470 TEST_HEADER include/spdk/stdinc.h 00:08:12.470 TEST_HEADER include/spdk/string.h 00:08:12.470 TEST_HEADER include/spdk/thread.h 00:08:12.470 TEST_HEADER include/spdk/trace.h 00:08:12.470 TEST_HEADER include/spdk/trace_parser.h 00:08:12.470 TEST_HEADER include/spdk/tree.h 00:08:12.470 TEST_HEADER include/spdk/ublk.h 00:08:12.470 TEST_HEADER include/spdk/util.h 00:08:12.470 TEST_HEADER include/spdk/uuid.h 00:08:12.470 TEST_HEADER include/spdk/version.h 00:08:12.470 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:12.470 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:12.470 TEST_HEADER include/spdk/vhost.h 00:08:12.470 TEST_HEADER include/spdk/vmd.h 00:08:12.470 TEST_HEADER include/spdk/xor.h 00:08:12.470 TEST_HEADER include/spdk/zipf.h 00:08:12.470 CXX test/cpp_headers/accel.o 00:08:12.470 LINK spdk_tgt 00:08:12.470 LINK verify 00:08:12.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:12.728 CC test/env/mem_callbacks/mem_callbacks.o 00:08:12.728 CXX test/cpp_headers/accel_module.o 00:08:12.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:12.728 CC test/event/event_perf/event_perf.o 00:08:12.986 LINK nvme_fuzz 00:08:12.986 CXX test/cpp_headers/assert.o 00:08:12.986 CC examples/thread/thread/thread_ex.o 00:08:12.986 CC test/event/reactor/reactor.o 00:08:12.986 LINK event_perf 00:08:12.986 CXX test/cpp_headers/barrier.o 00:08:13.244 CC test/event/reactor_perf/reactor_perf.o 00:08:13.244 LINK reactor 00:08:13.244 CXX test/cpp_headers/base64.o 00:08:13.244 LINK vhost_fuzz 00:08:13.244 LINK thread 00:08:13.244 CXX test/cpp_headers/bdev.o 00:08:13.244 LINK 
reactor_perf 00:08:13.244 LINK spdk_nvme_perf 00:08:13.244 LINK spdk_nvme_identify 00:08:13.244 LINK mem_callbacks 00:08:13.502 CXX test/cpp_headers/bdev_module.o 00:08:13.502 CXX test/cpp_headers/bdev_zone.o 00:08:13.502 CC examples/sock/hello_world/hello_sock.o 00:08:13.502 CXX test/cpp_headers/bit_array.o 00:08:13.502 CC test/event/app_repeat/app_repeat.o 00:08:13.502 CC examples/vmd/lsvmd/lsvmd.o 00:08:13.502 CC app/spdk_nvme_discover/discovery_aer.o 00:08:13.502 CC test/env/vtophys/vtophys.o 00:08:13.502 CC examples/idxd/perf/perf.o 00:08:13.760 CXX test/cpp_headers/bit_pool.o 00:08:13.760 LINK lsvmd 00:08:13.760 LINK hello_sock 00:08:13.760 CC examples/vmd/led/led.o 00:08:13.760 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:13.760 LINK vtophys 00:08:13.760 LINK app_repeat 00:08:13.760 LINK spdk_nvme_discover 00:08:13.760 CXX test/cpp_headers/blob_bdev.o 00:08:14.018 CXX test/cpp_headers/blobfs_bdev.o 00:08:14.018 LINK led 00:08:14.018 CXX test/cpp_headers/blobfs.o 00:08:14.018 LINK env_dpdk_post_init 00:08:14.018 CXX test/cpp_headers/blob.o 00:08:14.018 LINK idxd_perf 00:08:14.018 CC app/spdk_top/spdk_top.o 00:08:14.018 CC test/event/scheduler/scheduler.o 00:08:14.278 CXX test/cpp_headers/conf.o 00:08:14.278 CC test/env/memory/memory_ut.o 00:08:14.278 CC app/vhost/vhost.o 00:08:14.278 CC app/spdk_dd/spdk_dd.o 00:08:14.278 CC test/env/pci/pci_ut.o 00:08:14.278 LINK iscsi_fuzz 00:08:14.278 LINK scheduler 00:08:14.278 CXX test/cpp_headers/config.o 00:08:14.278 CC app/fio/nvme/fio_plugin.o 00:08:14.278 CC examples/accel/perf/accel_perf.o 00:08:14.278 CXX test/cpp_headers/cpuset.o 00:08:14.537 LINK vhost 00:08:14.537 CXX test/cpp_headers/crc16.o 00:08:14.537 CC test/app/histogram_perf/histogram_perf.o 00:08:14.537 CC test/app/jsoncat/jsoncat.o 00:08:14.795 CXX test/cpp_headers/crc32.o 00:08:14.795 LINK pci_ut 00:08:14.795 LINK spdk_dd 00:08:14.795 LINK histogram_perf 00:08:14.795 LINK jsoncat 00:08:14.795 CC test/rpc_client/rpc_client_test.o 00:08:14.795 CXX test/cpp_headers/crc64.o 00:08:14.795 LINK accel_perf 00:08:14.795 LINK spdk_nvme 00:08:15.060 LINK spdk_top 00:08:15.060 CXX test/cpp_headers/dif.o 00:08:15.060 LINK rpc_client_test 00:08:15.060 CC test/app/stub/stub.o 00:08:15.060 CC app/fio/bdev/fio_plugin.o 00:08:15.060 CC examples/nvme/hello_world/hello_world.o 00:08:15.324 CC test/accel/dif/dif.o 00:08:15.324 CC examples/blob/hello_world/hello_blob.o 00:08:15.324 CXX test/cpp_headers/dma.o 00:08:15.324 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:15.324 LINK stub 00:08:15.324 CC test/blobfs/mkfs/mkfs.o 00:08:15.324 CXX test/cpp_headers/endian.o 00:08:15.324 CC examples/bdev/hello_world/hello_bdev.o 00:08:15.324 LINK hello_world 00:08:15.324 LINK memory_ut 00:08:15.324 CXX test/cpp_headers/env_dpdk.o 00:08:15.584 LINK hello_blob 00:08:15.584 LINK hello_fsdev 00:08:15.584 LINK mkfs 00:08:15.584 CXX test/cpp_headers/env.o 00:08:15.584 CXX test/cpp_headers/event.o 00:08:15.584 LINK spdk_bdev 00:08:15.584 LINK hello_bdev 00:08:15.584 CC examples/nvme/reconnect/reconnect.o 00:08:15.841 CXX test/cpp_headers/fd_group.o 00:08:15.841 CC examples/blob/cli/blobcli.o 00:08:15.841 CC test/lvol/esnap/esnap.o 00:08:15.841 LINK dif 00:08:15.841 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:15.841 CC examples/nvme/arbitration/arbitration.o 00:08:15.841 CXX test/cpp_headers/fd.o 00:08:15.841 CC examples/nvme/hotplug/hotplug.o 00:08:16.100 CC test/nvme/aer/aer.o 00:08:16.100 CC examples/bdev/bdevperf/bdevperf.o 00:08:16.100 CXX test/cpp_headers/file.o 00:08:16.100 LINK reconnect 
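The long run of CXX test/cpp_headers/*.o steps through this part of the log compiles each public spdk/ header on its own, which catches headers that silently rely on other includes. A rough stand-alone equivalent of that kind of check is sketched below; it is not the project's actual harness, and the include path and compiler flags are assumptions.

  # Hedged sketch: compile every public header in isolation, roughly what the
  # CXX test/cpp_headers/*.o steps verify. Not SPDK's actual test harness.
  cd /home/vagrant/spdk_repo/spdk
  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
      g++ -std=c++17 -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $h"
  done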
00:08:16.100 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:16.100 LINK hotplug 00:08:16.358 CXX test/cpp_headers/fsdev.o 00:08:16.358 LINK arbitration 00:08:16.358 LINK aer 00:08:16.358 LINK blobcli 00:08:16.358 LINK nvme_manage 00:08:16.358 LINK cmb_copy 00:08:16.358 CXX test/cpp_headers/fsdev_module.o 00:08:16.358 CC test/bdev/bdevio/bdevio.o 00:08:16.616 CC examples/nvme/abort/abort.o 00:08:16.616 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:16.616 CXX test/cpp_headers/ftl.o 00:08:16.616 CC test/nvme/reset/reset.o 00:08:16.616 CXX test/cpp_headers/fuse_dispatcher.o 00:08:16.616 CC test/nvme/sgl/sgl.o 00:08:16.616 LINK pmr_persistence 00:08:16.874 CXX test/cpp_headers/gpt_spec.o 00:08:16.874 CXX test/cpp_headers/hexlify.o 00:08:16.874 CC test/nvme/e2edp/nvme_dp.o 00:08:16.874 LINK reset 00:08:16.874 LINK bdevperf 00:08:16.874 LINK bdevio 00:08:16.874 LINK sgl 00:08:16.874 CXX test/cpp_headers/histogram_data.o 00:08:16.874 LINK abort 00:08:17.133 CC test/nvme/overhead/overhead.o 00:08:17.133 CC test/nvme/err_injection/err_injection.o 00:08:17.133 LINK nvme_dp 00:08:17.133 CXX test/cpp_headers/idxd.o 00:08:17.133 CC test/nvme/startup/startup.o 00:08:17.133 CC test/nvme/reserve/reserve.o 00:08:17.133 CC test/nvme/simple_copy/simple_copy.o 00:08:17.133 CC test/nvme/connect_stress/connect_stress.o 00:08:17.392 LINK err_injection 00:08:17.392 CXX test/cpp_headers/idxd_spec.o 00:08:17.392 LINK startup 00:08:17.392 LINK overhead 00:08:17.392 CC test/nvme/boot_partition/boot_partition.o 00:08:17.392 CC examples/nvmf/nvmf/nvmf.o 00:08:17.392 LINK reserve 00:08:17.392 LINK connect_stress 00:08:17.392 LINK simple_copy 00:08:17.392 CXX test/cpp_headers/init.o 00:08:17.652 CC test/nvme/compliance/nvme_compliance.o 00:08:17.652 LINK boot_partition 00:08:17.652 CXX test/cpp_headers/ioat.o 00:08:17.652 CC test/nvme/fused_ordering/fused_ordering.o 00:08:17.652 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:17.652 CC test/nvme/fdp/fdp.o 00:08:17.652 CXX test/cpp_headers/ioat_spec.o 00:08:17.652 CC test/nvme/cuse/cuse.o 00:08:17.652 LINK nvmf 00:08:17.652 CXX test/cpp_headers/iscsi_spec.o 00:08:17.910 CXX test/cpp_headers/json.o 00:08:17.910 LINK fused_ordering 00:08:17.910 LINK doorbell_aers 00:08:17.910 CXX test/cpp_headers/jsonrpc.o 00:08:17.910 LINK nvme_compliance 00:08:17.910 CXX test/cpp_headers/keyring.o 00:08:17.910 CXX test/cpp_headers/keyring_module.o 00:08:17.910 CXX test/cpp_headers/likely.o 00:08:17.910 CXX test/cpp_headers/log.o 00:08:17.910 CXX test/cpp_headers/lvol.o 00:08:17.910 CXX test/cpp_headers/md5.o 00:08:18.169 CXX test/cpp_headers/memory.o 00:08:18.169 LINK fdp 00:08:18.169 CXX test/cpp_headers/mmio.o 00:08:18.169 CXX test/cpp_headers/nbd.o 00:08:18.169 CXX test/cpp_headers/net.o 00:08:18.169 CXX test/cpp_headers/notify.o 00:08:18.169 CXX test/cpp_headers/nvme.o 00:08:18.169 CXX test/cpp_headers/nvme_intel.o 00:08:18.169 CXX test/cpp_headers/nvme_ocssd.o 00:08:18.169 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:18.169 CXX test/cpp_headers/nvme_spec.o 00:08:18.444 CXX test/cpp_headers/nvme_zns.o 00:08:18.444 CXX test/cpp_headers/nvmf_cmd.o 00:08:18.444 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:18.444 CXX test/cpp_headers/nvmf.o 00:08:18.444 CXX test/cpp_headers/nvmf_spec.o 00:08:18.444 CXX test/cpp_headers/nvmf_transport.o 00:08:18.444 CXX test/cpp_headers/opal.o 00:08:18.444 CXX test/cpp_headers/opal_spec.o 00:08:18.444 CXX test/cpp_headers/pci_ids.o 00:08:18.444 CXX test/cpp_headers/pipe.o 00:08:18.444 CXX test/cpp_headers/queue.o 00:08:18.702 CXX 
test/cpp_headers/reduce.o 00:08:18.702 CXX test/cpp_headers/rpc.o 00:08:18.702 CXX test/cpp_headers/scheduler.o 00:08:18.702 CXX test/cpp_headers/scsi.o 00:08:18.702 CXX test/cpp_headers/scsi_spec.o 00:08:18.702 CXX test/cpp_headers/sock.o 00:08:18.702 CXX test/cpp_headers/stdinc.o 00:08:18.702 CXX test/cpp_headers/string.o 00:08:18.702 CXX test/cpp_headers/thread.o 00:08:18.702 CXX test/cpp_headers/trace.o 00:08:18.702 CXX test/cpp_headers/trace_parser.o 00:08:18.961 CXX test/cpp_headers/tree.o 00:08:18.961 CXX test/cpp_headers/ublk.o 00:08:18.961 CXX test/cpp_headers/util.o 00:08:18.961 CXX test/cpp_headers/uuid.o 00:08:18.961 CXX test/cpp_headers/version.o 00:08:18.961 CXX test/cpp_headers/vfio_user_pci.o 00:08:18.961 CXX test/cpp_headers/vfio_user_spec.o 00:08:18.961 CXX test/cpp_headers/vhost.o 00:08:18.961 CXX test/cpp_headers/vmd.o 00:08:18.961 CXX test/cpp_headers/xor.o 00:08:18.961 CXX test/cpp_headers/zipf.o 00:08:18.961 LINK cuse 00:08:21.494 LINK esnap 00:08:21.494 ************************************ 00:08:21.494 END TEST make 00:08:21.494 ************************************ 00:08:21.494 00:08:21.494 real 1m33.673s 00:08:21.494 user 8m27.511s 00:08:21.494 sys 1m42.659s 00:08:21.494 11:19:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:08:21.494 11:19:17 make -- common/autotest_common.sh@10 -- $ set +x 00:08:21.753 11:19:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:21.753 11:19:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:21.753 11:19:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:21.753 11:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:21.753 11:19:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:21.753 11:19:17 -- pm/common@44 -- $ pid=5242 00:08:21.753 11:19:17 -- pm/common@50 -- $ kill -TERM 5242 00:08:21.753 11:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:21.753 11:19:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:21.753 11:19:17 -- pm/common@44 -- $ pid=5243 00:08:21.753 11:19:17 -- pm/common@50 -- $ kill -TERM 5243 00:08:21.753 11:19:17 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:21.753 11:19:17 -- common/autotest_common.sh@1681 -- # lcov --version 00:08:21.753 11:19:17 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:21.753 11:19:17 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:21.753 11:19:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.753 11:19:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.753 11:19:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.753 11:19:17 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.753 11:19:17 -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.753 11:19:17 -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.753 11:19:17 -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.753 11:19:17 -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.753 11:19:17 -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.753 11:19:17 -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.753 11:19:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.753 11:19:17 -- scripts/common.sh@344 -- # case "$op" in 00:08:21.753 11:19:17 -- scripts/common.sh@345 -- # : 1 00:08:21.753 11:19:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.754 11:19:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.754 11:19:17 -- scripts/common.sh@365 -- # decimal 1 00:08:21.754 11:19:17 -- scripts/common.sh@353 -- # local d=1 00:08:21.754 11:19:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.754 11:19:17 -- scripts/common.sh@355 -- # echo 1 00:08:21.754 11:19:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.754 11:19:17 -- scripts/common.sh@366 -- # decimal 2 00:08:21.754 11:19:17 -- scripts/common.sh@353 -- # local d=2 00:08:21.754 11:19:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.754 11:19:17 -- scripts/common.sh@355 -- # echo 2 00:08:21.754 11:19:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.754 11:19:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.754 11:19:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.754 11:19:17 -- scripts/common.sh@368 -- # return 0 00:08:21.754 11:19:17 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.754 11:19:17 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.754 --rc genhtml_branch_coverage=1 00:08:21.754 --rc genhtml_function_coverage=1 00:08:21.754 --rc genhtml_legend=1 00:08:21.754 --rc geninfo_all_blocks=1 00:08:21.754 --rc geninfo_unexecuted_blocks=1 00:08:21.754 00:08:21.754 ' 00:08:21.754 11:19:17 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.754 --rc genhtml_branch_coverage=1 00:08:21.754 --rc genhtml_function_coverage=1 00:08:21.754 --rc genhtml_legend=1 00:08:21.754 --rc geninfo_all_blocks=1 00:08:21.754 --rc geninfo_unexecuted_blocks=1 00:08:21.754 00:08:21.754 ' 00:08:21.754 11:19:17 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.754 --rc genhtml_branch_coverage=1 00:08:21.754 --rc genhtml_function_coverage=1 00:08:21.754 --rc genhtml_legend=1 00:08:21.754 --rc geninfo_all_blocks=1 00:08:21.754 --rc geninfo_unexecuted_blocks=1 00:08:21.754 00:08:21.754 ' 00:08:21.754 11:19:17 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.754 --rc genhtml_branch_coverage=1 00:08:21.754 --rc genhtml_function_coverage=1 00:08:21.754 --rc genhtml_legend=1 00:08:21.754 --rc geninfo_all_blocks=1 00:08:21.754 --rc geninfo_unexecuted_blocks=1 00:08:21.754 00:08:21.754 ' 00:08:21.754 11:19:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.754 11:19:17 -- nvmf/common.sh@7 -- # uname -s 00:08:21.754 11:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.754 11:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.754 11:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.754 11:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.754 11:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.754 11:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.754 11:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.754 11:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.754 11:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.754 11:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.754 11:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:08:21.754 
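The xtrace above is scripts/common.sh probing the installed lcov version: when the reported version (1.15 here) is older than 2, the script keeps the pre-2.0 option spelling "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" in LCOV_OPTS; the same check recurs before the env and rpc suites later in this log. A minimal stand-alone sketch of that dotted-version comparison, reconstructed from the trace rather than copied from the repository (the helper name and exact structure are an approximation):

    # Return 0 (true) when version $1 is strictly older than $2, comparing dot/dash/colon fields numerically.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'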
11:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:08:21.754 11:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.754 11:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.754 11:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.754 11:19:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.754 11:19:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.754 11:19:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.012 11:19:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.012 11:19:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.012 11:19:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.012 11:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.012 11:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.012 11:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.012 11:19:17 -- paths/export.sh@5 -- # export PATH 00:08:22.012 11:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.012 11:19:17 -- nvmf/common.sh@51 -- # : 0 00:08:22.012 11:19:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.012 11:19:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.012 11:19:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.012 11:19:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.012 11:19:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.012 11:19:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.012 11:19:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.012 11:19:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.012 11:19:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.012 11:19:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:22.012 11:19:17 -- spdk/autotest.sh@32 -- # uname -s 00:08:22.012 11:19:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:22.012 11:19:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:22.012 11:19:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:22.012 11:19:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:22.012 11:19:17 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:22.012 11:19:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:22.012 11:19:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:22.012 11:19:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:22.012 11:19:17 -- spdk/autotest.sh@48 -- # udevadm_pid=54366 00:08:22.012 11:19:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:22.012 11:19:17 -- pm/common@17 -- # local monitor 00:08:22.012 11:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:22.012 11:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:22.012 11:19:17 -- pm/common@25 -- # sleep 1 00:08:22.012 11:19:17 -- pm/common@21 -- # date +%s 00:08:22.012 11:19:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:22.012 11:19:17 -- pm/common@21 -- # date +%s 00:08:22.012 11:19:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728299957 00:08:22.012 11:19:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728299957 00:08:22.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728299957_collect-vmstat.pm.log 00:08:22.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728299957_collect-cpu-load.pm.log 00:08:22.948 11:19:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:22.948 11:19:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:22.948 11:19:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.948 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:08:22.948 11:19:18 -- spdk/autotest.sh@59 -- # create_test_list 00:08:22.948 11:19:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:08:22.948 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:08:22.948 11:19:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:22.948 11:19:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:22.948 11:19:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:22.948 11:19:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:22.948 11:19:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:22.948 11:19:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:22.948 11:19:18 -- common/autotest_common.sh@1455 -- # uname 00:08:22.948 11:19:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:22.948 11:19:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:22.948 11:19:18 -- common/autotest_common.sh@1475 -- # uname 00:08:22.948 11:19:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:22.948 11:19:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:22.948 11:19:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:23.207 lcov: LCOV version 1.15 00:08:23.207 11:19:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:41.319 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:41.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:56.220 11:19:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:56.220 11:19:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.220 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:08:56.220 11:19:51 -- spdk/autotest.sh@78 -- # rm -f 00:08:56.220 11:19:51 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:56.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:56.735 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:56.735 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:56.735 11:19:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:56.735 11:19:52 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:56.735 11:19:52 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:56.735 11:19:52 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:56.735 11:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:56.735 11:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:56.735 11:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:56.735 11:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:56.735 11:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:56.735 11:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:56.735 11:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:56.735 11:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:08:56.735 11:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:08:56.735 11:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:56.735 11:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:08:56.735 11:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:08:56.735 11:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:56.735 11:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:56.735 11:19:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:56.735 11:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:56.735 11:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:56.735 11:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:56.735 11:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:56.735 11:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:56.735 No valid GPT data, bailing 
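The pre_cleanup step that follows walks every /dev/nvme*n* namespace, leaves zoned namespaces alone, and zeroes the first MiB of any namespace that carries no recognisable partition table; the spdk-gpt.py, blkid and dd lines below are that loop running over nvme0n1 and nvme1n1..n3. A rough sketch of the pattern, simplified from the trace (the real block_in_use helper in scripts/common.sh also consults scripts/spdk-gpt.py before falling back to blkid, as the log shows):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        ns=$(basename "$dev")
        # zoned namespaces are skipped (queue/zoned reports something other than "none")
        if [[ -e /sys/block/$ns/queue/zoned && $(< /sys/block/$ns/queue/zoned) != none ]]; then
            continue
        fi
        # only wipe namespaces with no partition-table signature
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale metadata in the first MiB
        fi
    done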
00:08:56.735 11:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:56.735 11:19:52 -- scripts/common.sh@394 -- # pt= 00:08:56.735 11:19:52 -- scripts/common.sh@395 -- # return 1 00:08:56.735 11:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:56.735 1+0 records in 00:08:56.735 1+0 records out 00:08:56.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372947 s, 281 MB/s 00:08:56.735 11:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:56.735 11:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:56.735 11:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:56.735 11:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:56.735 11:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:56.735 No valid GPT data, bailing 00:08:56.735 11:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:56.735 11:19:52 -- scripts/common.sh@394 -- # pt= 00:08:56.735 11:19:52 -- scripts/common.sh@395 -- # return 1 00:08:56.736 11:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:56.736 1+0 records in 00:08:56.736 1+0 records out 00:08:56.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462929 s, 227 MB/s 00:08:56.736 11:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:56.736 11:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:56.736 11:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:56.736 11:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:56.736 11:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:56.994 No valid GPT data, bailing 00:08:56.994 11:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:56.994 11:19:52 -- scripts/common.sh@394 -- # pt= 00:08:56.994 11:19:52 -- scripts/common.sh@395 -- # return 1 00:08:56.994 11:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:56.994 1+0 records in 00:08:56.994 1+0 records out 00:08:56.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471575 s, 222 MB/s 00:08:56.994 11:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:56.994 11:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:56.994 11:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:56.994 11:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:56.994 11:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:56.994 No valid GPT data, bailing 00:08:56.994 11:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:56.994 11:19:52 -- scripts/common.sh@394 -- # pt= 00:08:56.994 11:19:52 -- scripts/common.sh@395 -- # return 1 00:08:56.994 11:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:56.994 1+0 records in 00:08:56.994 1+0 records out 00:08:56.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00352972 s, 297 MB/s 00:08:56.994 11:19:52 -- spdk/autotest.sh@105 -- # sync 00:08:56.994 11:19:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:56.994 11:19:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:56.994 11:19:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:58.893 11:19:54 -- spdk/autotest.sh@111 -- # uname -s 00:08:58.893 11:19:54 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:08:58.893 11:19:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:58.893 11:19:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:59.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:59.828 Hugepages 00:08:59.828 node hugesize free / total 00:08:59.828 node0 1048576kB 0 / 0 00:08:59.828 node0 2048kB 0 / 0 00:08:59.828 00:08:59.828 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:59.828 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:59.828 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:59.828 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:59.828 11:19:55 -- spdk/autotest.sh@117 -- # uname -s 00:08:59.829 11:19:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:59.829 11:19:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:59.829 11:19:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.683 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.683 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.683 11:19:56 -- common/autotest_common.sh@1515 -- # sleep 1 00:09:01.620 11:19:57 -- common/autotest_common.sh@1516 -- # bdfs=() 00:09:01.620 11:19:57 -- common/autotest_common.sh@1516 -- # local bdfs 00:09:01.620 11:19:57 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.620 11:19:57 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:09:01.620 11:19:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:01.620 11:19:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:01.620 11:19:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.620 11:19:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:01.620 11:19:57 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.620 11:19:57 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:01.620 11:19:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:01.620 11:19:57 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:02.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.187 Waiting for block devices as requested 00:09:02.187 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.187 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.187 11:19:57 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:02.187 11:19:57 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:02.187 11:19:57 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:09:02.187 11:19:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:02.187 11:19:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:09:02.187 11:19:57 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:02.187 11:19:57 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:02.187 11:19:57 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:02.187 11:19:57 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:02.187 11:19:57 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:02.187 11:19:57 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:09:02.187 11:19:57 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:02.187 11:19:57 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:02.446 11:19:57 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:02.446 11:19:57 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:02.446 11:19:57 -- common/autotest_common.sh@1541 -- # continue 00:09:02.446 11:19:57 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:02.446 11:19:57 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:02.446 11:19:57 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:09:02.446 11:19:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:02.446 11:19:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:09:02.446 11:19:57 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:02.446 11:19:57 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:02.446 11:19:57 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:02.446 11:19:57 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:02.446 11:19:57 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:02.446 11:19:57 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:09:02.446 11:19:57 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:02.446 11:19:57 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:02.446 11:19:57 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:02.446 11:19:57 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:02.446 11:19:57 -- common/autotest_common.sh@1541 -- # continue 00:09:02.446 11:19:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:02.446 11:19:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.446 11:19:57 -- common/autotest_common.sh@10 -- # set +x 00:09:02.446 11:19:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:02.446 11:19:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.446 11:19:57 -- common/autotest_common.sh@10 -- # set +x 00:09:02.446 11:19:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:03.016 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.016 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.298 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.298 11:19:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:03.298 11:19:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.298 11:19:58 -- common/autotest_common.sh@10 -- # set +x 00:09:03.298 11:19:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:03.298 11:19:58 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:09:03.298 11:19:58 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:09:03.298 11:19:58 -- common/autotest_common.sh@1561 -- # bdfs=() 00:09:03.298 11:19:58 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:09:03.298 11:19:58 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:09:03.298 11:19:58 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:09:03.298 11:19:58 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:09:03.298 11:19:58 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:03.298 11:19:58 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:03.298 11:19:58 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.298 11:19:58 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.298 11:19:58 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:03.298 11:19:58 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:03.298 11:19:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:03.298 11:19:58 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:03.298 11:19:58 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:03.298 11:19:58 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:03.298 11:19:58 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:03.298 11:19:58 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:03.298 11:19:58 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:03.298 11:19:58 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:03.298 11:19:58 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:03.298 11:19:58 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:09:03.298 11:19:58 -- common/autotest_common.sh@1570 -- # return 0 00:09:03.298 11:19:58 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:09:03.298 11:19:58 -- common/autotest_common.sh@1578 -- # return 0 00:09:03.298 11:19:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:03.298 11:19:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:03.298 11:19:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:03.298 11:19:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:03.298 11:19:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:03.298 11:19:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.298 11:19:58 -- common/autotest_common.sh@10 -- # set +x 00:09:03.298 11:19:58 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:09:03.298 11:19:58 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:09:03.298 11:19:58 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:09:03.298 11:19:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:03.298 11:19:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.298 11:19:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.298 11:19:58 -- common/autotest_common.sh@10 -- # set +x 00:09:03.298 ************************************ 00:09:03.298 START TEST env 00:09:03.298 ************************************ 00:09:03.298 11:19:58 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:03.557 * Looking for test storage... 00:09:03.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.557 11:19:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.557 11:19:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.557 11:19:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.557 11:19:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.557 11:19:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.557 11:19:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.557 11:19:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.557 11:19:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.557 11:19:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.557 11:19:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.557 11:19:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.557 11:19:58 env -- scripts/common.sh@344 -- # case "$op" in 00:09:03.557 11:19:58 env -- scripts/common.sh@345 -- # : 1 00:09:03.557 11:19:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.557 11:19:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.557 11:19:58 env -- scripts/common.sh@365 -- # decimal 1 00:09:03.557 11:19:58 env -- scripts/common.sh@353 -- # local d=1 00:09:03.557 11:19:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.557 11:19:58 env -- scripts/common.sh@355 -- # echo 1 00:09:03.557 11:19:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.557 11:19:58 env -- scripts/common.sh@366 -- # decimal 2 00:09:03.557 11:19:58 env -- scripts/common.sh@353 -- # local d=2 00:09:03.557 11:19:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.557 11:19:58 env -- scripts/common.sh@355 -- # echo 2 00:09:03.557 11:19:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.557 11:19:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.557 11:19:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.557 11:19:58 env -- scripts/common.sh@368 -- # return 0 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.557 --rc genhtml_branch_coverage=1 00:09:03.557 --rc genhtml_function_coverage=1 00:09:03.557 --rc genhtml_legend=1 00:09:03.557 --rc geninfo_all_blocks=1 00:09:03.557 --rc geninfo_unexecuted_blocks=1 00:09:03.557 00:09:03.557 ' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.557 --rc genhtml_branch_coverage=1 00:09:03.557 --rc genhtml_function_coverage=1 00:09:03.557 --rc genhtml_legend=1 00:09:03.557 --rc geninfo_all_blocks=1 00:09:03.557 --rc geninfo_unexecuted_blocks=1 00:09:03.557 00:09:03.557 ' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.557 --rc genhtml_branch_coverage=1 00:09:03.557 --rc genhtml_function_coverage=1 00:09:03.557 --rc genhtml_legend=1 00:09:03.557 --rc geninfo_all_blocks=1 00:09:03.557 --rc geninfo_unexecuted_blocks=1 00:09:03.557 00:09:03.557 ' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.557 --rc genhtml_branch_coverage=1 00:09:03.557 --rc genhtml_function_coverage=1 00:09:03.557 --rc genhtml_legend=1 00:09:03.557 --rc geninfo_all_blocks=1 00:09:03.557 --rc geninfo_unexecuted_blocks=1 00:09:03.557 00:09:03.557 ' 00:09:03.557 11:19:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.557 11:19:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.557 11:19:58 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.557 ************************************ 00:09:03.557 START TEST env_memory 00:09:03.557 ************************************ 00:09:03.557 11:19:58 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:03.557 00:09:03.557 00:09:03.557 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.557 http://cunit.sourceforge.net/ 00:09:03.557 00:09:03.557 00:09:03.557 Suite: memory 00:09:03.557 Test: alloc and free memory map ...[2024-10-07 11:19:58.989698] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:03.557 passed 00:09:03.557 Test: mem map translation ...[2024-10-07 11:19:59.021358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:03.557 [2024-10-07 11:19:59.021626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:03.557 [2024-10-07 11:19:59.021774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:03.557 [2024-10-07 11:19:59.021862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:03.557 passed 00:09:03.816 Test: mem map registration ...[2024-10-07 11:19:59.085984] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:03.816 [2024-10-07 11:19:59.086234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:03.816 passed 00:09:03.816 Test: mem map adjacent registrations ...passed 00:09:03.816 00:09:03.816 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.816 suites 1 1 n/a 0 0 00:09:03.816 tests 4 4 4 0 0 00:09:03.816 asserts 152 152 152 0 n/a 00:09:03.816 00:09:03.816 Elapsed time = 0.214 seconds 00:09:03.816 00:09:03.816 real 0m0.233s 00:09:03.816 user 0m0.215s 00:09:03.816 sys 0m0.013s 00:09:03.816 11:19:59 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.816 11:19:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:03.816 ************************************ 00:09:03.816 END TEST env_memory 00:09:03.816 ************************************ 00:09:03.816 11:19:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:03.816 11:19:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.816 11:19:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.816 11:19:59 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.816 ************************************ 00:09:03.816 START TEST env_vtophys 00:09:03.816 ************************************ 00:09:03.816 11:19:59 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:03.816 EAL: lib.eal log level changed from notice to debug 00:09:03.816 EAL: Detected lcore 0 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 1 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 2 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 3 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 4 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 5 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 6 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 7 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 8 as core 0 on socket 0 00:09:03.816 EAL: Detected lcore 9 as core 0 on socket 0 00:09:03.816 EAL: Maximum logical cores by configuration: 128 00:09:03.816 EAL: Detected CPU lcores: 10 00:09:03.816 EAL: Detected NUMA nodes: 1 00:09:03.816 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:03.816 EAL: Detected shared linkage of DPDK 00:09:03.816 EAL: No 
shared files mode enabled, IPC will be disabled 00:09:03.816 EAL: Selected IOVA mode 'PA' 00:09:03.816 EAL: Probing VFIO support... 00:09:03.816 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:03.816 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:03.816 EAL: Ask a virtual area of 0x2e000 bytes 00:09:03.816 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:03.816 EAL: Setting up physically contiguous memory... 00:09:03.816 EAL: Setting maximum number of open files to 524288 00:09:03.816 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:03.816 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:03.816 EAL: Ask a virtual area of 0x61000 bytes 00:09:03.816 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:03.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:03.816 EAL: Ask a virtual area of 0x400000000 bytes 00:09:03.816 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:03.816 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:03.816 EAL: Ask a virtual area of 0x61000 bytes 00:09:03.816 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:03.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:03.816 EAL: Ask a virtual area of 0x400000000 bytes 00:09:03.816 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:03.816 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:03.816 EAL: Ask a virtual area of 0x61000 bytes 00:09:03.816 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:03.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:03.816 EAL: Ask a virtual area of 0x400000000 bytes 00:09:03.816 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:03.816 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:03.816 EAL: Ask a virtual area of 0x61000 bytes 00:09:03.816 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:03.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:03.816 EAL: Ask a virtual area of 0x400000000 bytes 00:09:03.816 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:03.816 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:03.816 EAL: Hugepages will be freed exactly as allocated. 00:09:03.816 EAL: No shared files mode enabled, IPC is disabled 00:09:03.816 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: TSC frequency is ~2200000 KHz 00:09:04.074 EAL: Main lcore 0 is ready (tid=7f1c5e714a00;cpuset=[0]) 00:09:04.074 EAL: Trying to obtain current memory policy. 00:09:04.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.074 EAL: Restoring previous memory policy: 0 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was expanded by 2MB 00:09:04.074 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:04.074 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:04.074 EAL: Mem event callback 'spdk:(nil)' registered 00:09:04.074 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:09:04.074 00:09:04.074 00:09:04.074 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.074 http://cunit.sourceforge.net/ 00:09:04.074 00:09:04.074 00:09:04.074 Suite: components_suite 00:09:04.074 Test: vtophys_malloc_test ...passed 00:09:04.074 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:04.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.074 EAL: Restoring previous memory policy: 4 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was expanded by 4MB 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was shrunk by 4MB 00:09:04.074 EAL: Trying to obtain current memory policy. 00:09:04.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.074 EAL: Restoring previous memory policy: 4 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was expanded by 6MB 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was shrunk by 6MB 00:09:04.074 EAL: Trying to obtain current memory policy. 00:09:04.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.074 EAL: Restoring previous memory policy: 4 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was expanded by 10MB 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was shrunk by 10MB 00:09:04.074 EAL: Trying to obtain current memory policy. 00:09:04.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.074 EAL: Restoring previous memory policy: 4 00:09:04.074 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.074 EAL: request: mp_malloc_sync 00:09:04.074 EAL: No shared files mode enabled, IPC is disabled 00:09:04.074 EAL: Heap on socket 0 was expanded by 18MB 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was shrunk by 18MB 00:09:04.075 EAL: Trying to obtain current memory policy. 00:09:04.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.075 EAL: Restoring previous memory policy: 4 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was expanded by 34MB 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was shrunk by 34MB 00:09:04.075 EAL: Trying to obtain current memory policy. 
00:09:04.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.075 EAL: Restoring previous memory policy: 4 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was expanded by 66MB 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was shrunk by 66MB 00:09:04.075 EAL: Trying to obtain current memory policy. 00:09:04.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.075 EAL: Restoring previous memory policy: 4 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was expanded by 130MB 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was shrunk by 130MB 00:09:04.075 EAL: Trying to obtain current memory policy. 00:09:04.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.075 EAL: Restoring previous memory policy: 4 00:09:04.075 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.075 EAL: request: mp_malloc_sync 00:09:04.075 EAL: No shared files mode enabled, IPC is disabled 00:09:04.075 EAL: Heap on socket 0 was expanded by 258MB 00:09:04.333 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.333 EAL: request: mp_malloc_sync 00:09:04.333 EAL: No shared files mode enabled, IPC is disabled 00:09:04.333 EAL: Heap on socket 0 was shrunk by 258MB 00:09:04.333 EAL: Trying to obtain current memory policy. 00:09:04.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.333 EAL: Restoring previous memory policy: 4 00:09:04.333 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.333 EAL: request: mp_malloc_sync 00:09:04.333 EAL: No shared files mode enabled, IPC is disabled 00:09:04.333 EAL: Heap on socket 0 was expanded by 514MB 00:09:04.591 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.591 EAL: request: mp_malloc_sync 00:09:04.591 EAL: No shared files mode enabled, IPC is disabled 00:09:04.591 EAL: Heap on socket 0 was shrunk by 514MB 00:09:04.591 EAL: Trying to obtain current memory policy. 
00:09:04.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.849 EAL: Restoring previous memory policy: 4 00:09:04.849 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.849 EAL: request: mp_malloc_sync 00:09:04.849 EAL: No shared files mode enabled, IPC is disabled 00:09:04.849 EAL: Heap on socket 0 was expanded by 1026MB 00:09:05.107 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.365 passed 00:09:05.365 00:09:05.365 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.365 suites 1 1 n/a 0 0 00:09:05.365 tests 2 2 2 0 0 00:09:05.365 asserts 5484 5484 5484 0 n/a 00:09:05.365 00:09:05.365 Elapsed time = 1.234 seconds 00:09:05.365 EAL: request: mp_malloc_sync 00:09:05.365 EAL: No shared files mode enabled, IPC is disabled 00:09:05.365 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:05.365 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.365 EAL: request: mp_malloc_sync 00:09:05.365 EAL: No shared files mode enabled, IPC is disabled 00:09:05.365 EAL: Heap on socket 0 was shrunk by 2MB 00:09:05.365 EAL: No shared files mode enabled, IPC is disabled 00:09:05.365 EAL: No shared files mode enabled, IPC is disabled 00:09:05.365 EAL: No shared files mode enabled, IPC is disabled 00:09:05.365 00:09:05.365 real 0m1.434s 00:09:05.365 user 0m0.780s 00:09:05.365 sys 0m0.521s 00:09:05.365 11:20:00 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.365 11:20:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:05.365 ************************************ 00:09:05.365 END TEST env_vtophys 00:09:05.365 ************************************ 00:09:05.365 11:20:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:05.365 11:20:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.365 11:20:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.365 11:20:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:05.365 ************************************ 00:09:05.365 START TEST env_pci 00:09:05.365 ************************************ 00:09:05.365 11:20:00 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:05.365 00:09:05.365 00:09:05.365 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.365 http://cunit.sourceforge.net/ 00:09:05.365 00:09:05.365 00:09:05.365 Suite: pci 00:09:05.365 Test: pci_hook ...[2024-10-07 11:20:00.717909] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56614 has claimed it 00:09:05.365 passed 00:09:05.365 00:09:05.365 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.365 suites 1 1 n/a 0 0 00:09:05.365 tests 1 1 1 0 0 00:09:05.365 asserts 25 25 25 0 n/a 00:09:05.365 00:09:05.365 Elapsed time = 0.002 seconds 00:09:05.365 EAL: Cannot find device (10000:00:01.0) 00:09:05.365 EAL: Failed to attach device on primary process 00:09:05.365 00:09:05.365 real 0m0.018s 00:09:05.365 user 0m0.009s 00:09:05.365 sys 0m0.008s 00:09:05.365 11:20:00 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.365 11:20:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:05.365 ************************************ 00:09:05.365 END TEST env_pci 00:09:05.365 ************************************ 00:09:05.365 11:20:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:05.365 11:20:00 env -- env/env.sh@15 -- # uname 00:09:05.365 11:20:00 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:05.365 11:20:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:05.365 11:20:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:05.365 11:20:00 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.365 11:20:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.365 11:20:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:05.365 ************************************ 00:09:05.365 START TEST env_dpdk_post_init 00:09:05.365 ************************************ 00:09:05.365 11:20:00 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:05.365 EAL: Detected CPU lcores: 10 00:09:05.365 EAL: Detected NUMA nodes: 1 00:09:05.365 EAL: Detected shared linkage of DPDK 00:09:05.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:05.365 EAL: Selected IOVA mode 'PA' 00:09:05.623 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:05.623 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:05.623 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:05.623 Starting DPDK initialization... 00:09:05.623 Starting SPDK post initialization... 00:09:05.623 SPDK NVMe probe 00:09:05.623 Attaching to 0000:00:10.0 00:09:05.623 Attaching to 0000:00:11.0 00:09:05.623 Attached to 0000:00:10.0 00:09:05.623 Attached to 0000:00:11.0 00:09:05.623 Cleaning up... 00:09:05.623 00:09:05.623 real 0m0.201s 00:09:05.623 user 0m0.055s 00:09:05.623 sys 0m0.045s 00:09:05.623 11:20:00 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.623 11:20:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:05.623 ************************************ 00:09:05.623 END TEST env_dpdk_post_init 00:09:05.623 ************************************ 00:09:05.623 11:20:01 env -- env/env.sh@26 -- # uname 00:09:05.623 11:20:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:05.623 11:20:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:05.623 11:20:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.623 11:20:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.623 11:20:01 env -- common/autotest_common.sh@10 -- # set +x 00:09:05.623 ************************************ 00:09:05.623 START TEST env_mem_callbacks 00:09:05.623 ************************************ 00:09:05.623 11:20:01 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:05.623 EAL: Detected CPU lcores: 10 00:09:05.623 EAL: Detected NUMA nodes: 1 00:09:05.623 EAL: Detected shared linkage of DPDK 00:09:05.623 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:05.623 EAL: Selected IOVA mode 'PA' 00:09:05.880 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:05.880 00:09:05.880 00:09:05.880 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.880 http://cunit.sourceforge.net/ 00:09:05.880 00:09:05.880 00:09:05.880 Suite: memory 00:09:05.880 Test: test ... 
00:09:05.880 register 0x200000200000 2097152 00:09:05.880 malloc 3145728 00:09:05.880 register 0x200000400000 4194304 00:09:05.880 buf 0x200000500000 len 3145728 PASSED 00:09:05.880 malloc 64 00:09:05.880 buf 0x2000004fff40 len 64 PASSED 00:09:05.880 malloc 4194304 00:09:05.880 register 0x200000800000 6291456 00:09:05.880 buf 0x200000a00000 len 4194304 PASSED 00:09:05.880 free 0x200000500000 3145728 00:09:05.880 free 0x2000004fff40 64 00:09:05.880 unregister 0x200000400000 4194304 PASSED 00:09:05.880 free 0x200000a00000 4194304 00:09:05.880 unregister 0x200000800000 6291456 PASSED 00:09:05.880 malloc 8388608 00:09:05.880 register 0x200000400000 10485760 00:09:05.880 buf 0x200000600000 len 8388608 PASSED 00:09:05.880 free 0x200000600000 8388608 00:09:05.880 unregister 0x200000400000 10485760 PASSED 00:09:05.880 passed 00:09:05.880 00:09:05.880 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.880 suites 1 1 n/a 0 0 00:09:05.880 tests 1 1 1 0 0 00:09:05.880 asserts 15 15 15 0 n/a 00:09:05.880 00:09:05.880 Elapsed time = 0.009 seconds 00:09:05.880 00:09:05.880 real 0m0.146s 00:09:05.880 user 0m0.021s 00:09:05.880 sys 0m0.025s 00:09:05.880 11:20:01 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.880 11:20:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 ************************************ 00:09:05.880 END TEST env_mem_callbacks 00:09:05.880 ************************************ 00:09:05.880 00:09:05.880 real 0m2.469s 00:09:05.880 user 0m1.262s 00:09:05.880 sys 0m0.861s 00:09:05.880 11:20:01 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.880 ************************************ 00:09:05.880 END TEST env 00:09:05.880 ************************************ 00:09:05.880 11:20:01 env -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 11:20:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:05.880 11:20:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.880 11:20:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.880 11:20:01 -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 ************************************ 00:09:05.880 START TEST rpc 00:09:05.880 ************************************ 00:09:05.880 11:20:01 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:05.880 * Looking for test storage... 
00:09:05.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:05.881 11:20:01 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.881 11:20:01 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.881 11:20:01 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:06.138 11:20:01 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:06.138 11:20:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.138 11:20:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.138 11:20:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.138 11:20:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.138 11:20:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.138 11:20:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.138 11:20:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.138 11:20:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.138 11:20:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.139 11:20:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.139 11:20:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:06.139 11:20:01 rpc -- scripts/common.sh@345 -- # : 1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.139 11:20:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.139 11:20:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@353 -- # local d=1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.139 11:20:01 rpc -- scripts/common.sh@355 -- # echo 1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.139 11:20:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:06.139 11:20:01 rpc -- scripts/common.sh@353 -- # local d=2 00:09:06.139 11:20:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.139 11:20:01 rpc -- scripts/common.sh@355 -- # echo 2 00:09:06.139 11:20:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.139 11:20:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.139 11:20:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.139 11:20:01 rpc -- scripts/common.sh@368 -- # return 0 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:06.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.139 --rc genhtml_branch_coverage=1 00:09:06.139 --rc genhtml_function_coverage=1 00:09:06.139 --rc genhtml_legend=1 00:09:06.139 --rc geninfo_all_blocks=1 00:09:06.139 --rc geninfo_unexecuted_blocks=1 00:09:06.139 00:09:06.139 ' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:06.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.139 --rc genhtml_branch_coverage=1 00:09:06.139 --rc genhtml_function_coverage=1 00:09:06.139 --rc genhtml_legend=1 00:09:06.139 --rc geninfo_all_blocks=1 00:09:06.139 --rc geninfo_unexecuted_blocks=1 00:09:06.139 00:09:06.139 ' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:06.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.139 --rc genhtml_branch_coverage=1 00:09:06.139 --rc genhtml_function_coverage=1 00:09:06.139 --rc 
genhtml_legend=1 00:09:06.139 --rc geninfo_all_blocks=1 00:09:06.139 --rc geninfo_unexecuted_blocks=1 00:09:06.139 00:09:06.139 ' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:06.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.139 --rc genhtml_branch_coverage=1 00:09:06.139 --rc genhtml_function_coverage=1 00:09:06.139 --rc genhtml_legend=1 00:09:06.139 --rc geninfo_all_blocks=1 00:09:06.139 --rc geninfo_unexecuted_blocks=1 00:09:06.139 00:09:06.139 ' 00:09:06.139 11:20:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56732 00:09:06.139 11:20:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.139 11:20:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56732 00:09:06.139 11:20:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@831 -- # '[' -z 56732 ']' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.139 11:20:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.139 [2024-10-07 11:20:01.499117] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:06.139 [2024-10-07 11:20:01.499230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56732 ] 00:09:06.139 [2024-10-07 11:20:01.633809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.397 [2024-10-07 11:20:01.752132] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:06.397 [2024-10-07 11:20:01.752201] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56732' to capture a snapshot of events at runtime. 00:09:06.397 [2024-10-07 11:20:01.752213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.397 [2024-10-07 11:20:01.752222] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.397 [2024-10-07 11:20:01.752229] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56732 for offline analysis/debug. 
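The rpc_integrity steps that follow drive the target started above (spdk_tgt -e bdev, listening on /var/tmp/spdk.sock) through the rpc_cmd wrapper. The same sequence can be reproduced by hand; a minimal sketch assuming scripts/rpc.py from the repo checked out in this workspace, which is what rpc_cmd ultimately invokes, and the bdev names the test happens to pick:

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512               # 8 MiB malloc bdev with 512-byte blocks (prints e.g. Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length              # 2 once the passthru claims the malloc bdev
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0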
00:09:06.397 [2024-10-07 11:20:01.752668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.397 [2024-10-07 11:20:01.823572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.331 11:20:02 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.331 11:20:02 rpc -- common/autotest_common.sh@864 -- # return 0 00:09:07.331 11:20:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.331 11:20:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.331 11:20:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:07.331 11:20:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:07.331 11:20:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.331 11:20:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.331 11:20:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.331 ************************************ 00:09:07.331 START TEST rpc_integrity 00:09:07.331 ************************************ 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:07.331 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.331 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:07.331 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:07.331 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:07.331 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.331 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:07.332 { 00:09:07.332 "name": "Malloc0", 00:09:07.332 "aliases": [ 00:09:07.332 "df50601c-99b8-423f-b1f2-88da942d86a1" 00:09:07.332 ], 00:09:07.332 "product_name": "Malloc disk", 00:09:07.332 "block_size": 512, 00:09:07.332 "num_blocks": 16384, 00:09:07.332 "uuid": "df50601c-99b8-423f-b1f2-88da942d86a1", 00:09:07.332 "assigned_rate_limits": { 00:09:07.332 "rw_ios_per_sec": 0, 00:09:07.332 "rw_mbytes_per_sec": 0, 00:09:07.332 "r_mbytes_per_sec": 0, 00:09:07.332 "w_mbytes_per_sec": 0 00:09:07.332 }, 00:09:07.332 "claimed": false, 00:09:07.332 "zoned": false, 00:09:07.332 
"supported_io_types": { 00:09:07.332 "read": true, 00:09:07.332 "write": true, 00:09:07.332 "unmap": true, 00:09:07.332 "flush": true, 00:09:07.332 "reset": true, 00:09:07.332 "nvme_admin": false, 00:09:07.332 "nvme_io": false, 00:09:07.332 "nvme_io_md": false, 00:09:07.332 "write_zeroes": true, 00:09:07.332 "zcopy": true, 00:09:07.332 "get_zone_info": false, 00:09:07.332 "zone_management": false, 00:09:07.332 "zone_append": false, 00:09:07.332 "compare": false, 00:09:07.332 "compare_and_write": false, 00:09:07.332 "abort": true, 00:09:07.332 "seek_hole": false, 00:09:07.332 "seek_data": false, 00:09:07.332 "copy": true, 00:09:07.332 "nvme_iov_md": false 00:09:07.332 }, 00:09:07.332 "memory_domains": [ 00:09:07.332 { 00:09:07.332 "dma_device_id": "system", 00:09:07.332 "dma_device_type": 1 00:09:07.332 }, 00:09:07.332 { 00:09:07.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.332 "dma_device_type": 2 00:09:07.332 } 00:09:07.332 ], 00:09:07.332 "driver_specific": {} 00:09:07.332 } 00:09:07.332 ]' 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 [2024-10-07 11:20:02.677023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:07.332 [2024-10-07 11:20:02.677112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.332 [2024-10-07 11:20:02.677148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10d8120 00:09:07.332 [2024-10-07 11:20:02.677164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.332 [2024-10-07 11:20:02.679196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.332 [2024-10-07 11:20:02.679425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:07.332 Passthru0 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.332 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:07.332 { 00:09:07.332 "name": "Malloc0", 00:09:07.332 "aliases": [ 00:09:07.332 "df50601c-99b8-423f-b1f2-88da942d86a1" 00:09:07.332 ], 00:09:07.332 "product_name": "Malloc disk", 00:09:07.332 "block_size": 512, 00:09:07.332 "num_blocks": 16384, 00:09:07.332 "uuid": "df50601c-99b8-423f-b1f2-88da942d86a1", 00:09:07.332 "assigned_rate_limits": { 00:09:07.332 "rw_ios_per_sec": 0, 00:09:07.332 "rw_mbytes_per_sec": 0, 00:09:07.332 "r_mbytes_per_sec": 0, 00:09:07.332 "w_mbytes_per_sec": 0 00:09:07.332 }, 00:09:07.332 "claimed": true, 00:09:07.332 "claim_type": "exclusive_write", 00:09:07.332 "zoned": false, 00:09:07.332 "supported_io_types": { 00:09:07.332 "read": true, 00:09:07.332 "write": true, 00:09:07.332 "unmap": true, 00:09:07.332 "flush": true, 00:09:07.332 "reset": true, 00:09:07.332 "nvme_admin": false, 
00:09:07.332 "nvme_io": false, 00:09:07.332 "nvme_io_md": false, 00:09:07.332 "write_zeroes": true, 00:09:07.332 "zcopy": true, 00:09:07.332 "get_zone_info": false, 00:09:07.332 "zone_management": false, 00:09:07.332 "zone_append": false, 00:09:07.332 "compare": false, 00:09:07.332 "compare_and_write": false, 00:09:07.332 "abort": true, 00:09:07.332 "seek_hole": false, 00:09:07.332 "seek_data": false, 00:09:07.332 "copy": true, 00:09:07.332 "nvme_iov_md": false 00:09:07.332 }, 00:09:07.332 "memory_domains": [ 00:09:07.332 { 00:09:07.332 "dma_device_id": "system", 00:09:07.332 "dma_device_type": 1 00:09:07.332 }, 00:09:07.332 { 00:09:07.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.332 "dma_device_type": 2 00:09:07.332 } 00:09:07.332 ], 00:09:07.332 "driver_specific": {} 00:09:07.332 }, 00:09:07.332 { 00:09:07.332 "name": "Passthru0", 00:09:07.332 "aliases": [ 00:09:07.332 "ce5bd4bf-8e91-58b1-8846-e2dbac0e9957" 00:09:07.332 ], 00:09:07.332 "product_name": "passthru", 00:09:07.332 "block_size": 512, 00:09:07.332 "num_blocks": 16384, 00:09:07.332 "uuid": "ce5bd4bf-8e91-58b1-8846-e2dbac0e9957", 00:09:07.332 "assigned_rate_limits": { 00:09:07.332 "rw_ios_per_sec": 0, 00:09:07.332 "rw_mbytes_per_sec": 0, 00:09:07.332 "r_mbytes_per_sec": 0, 00:09:07.332 "w_mbytes_per_sec": 0 00:09:07.332 }, 00:09:07.332 "claimed": false, 00:09:07.332 "zoned": false, 00:09:07.332 "supported_io_types": { 00:09:07.332 "read": true, 00:09:07.332 "write": true, 00:09:07.332 "unmap": true, 00:09:07.332 "flush": true, 00:09:07.332 "reset": true, 00:09:07.332 "nvme_admin": false, 00:09:07.332 "nvme_io": false, 00:09:07.332 "nvme_io_md": false, 00:09:07.332 "write_zeroes": true, 00:09:07.333 "zcopy": true, 00:09:07.333 "get_zone_info": false, 00:09:07.333 "zone_management": false, 00:09:07.333 "zone_append": false, 00:09:07.333 "compare": false, 00:09:07.333 "compare_and_write": false, 00:09:07.333 "abort": true, 00:09:07.333 "seek_hole": false, 00:09:07.333 "seek_data": false, 00:09:07.333 "copy": true, 00:09:07.333 "nvme_iov_md": false 00:09:07.333 }, 00:09:07.333 "memory_domains": [ 00:09:07.333 { 00:09:07.333 "dma_device_id": "system", 00:09:07.333 "dma_device_type": 1 00:09:07.333 }, 00:09:07.333 { 00:09:07.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.333 "dma_device_type": 2 00:09:07.333 } 00:09:07.333 ], 00:09:07.333 "driver_specific": { 00:09:07.333 "passthru": { 00:09:07.333 "name": "Passthru0", 00:09:07.333 "base_bdev_name": "Malloc0" 00:09:07.333 } 00:09:07.333 } 00:09:07.333 } 00:09:07.333 ]' 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:07.333 11:20:02 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:07.333 11:20:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:07.333 ************************************ 00:09:07.333 END TEST rpc_integrity 00:09:07.333 ************************************ 00:09:07.333 00:09:07.333 real 0m0.330s 00:09:07.333 user 0m0.216s 00:09:07.333 sys 0m0.042s 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.333 11:20:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:07.591 11:20:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:07.591 11:20:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.591 11:20:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.591 11:20:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.591 ************************************ 00:09:07.591 START TEST rpc_plugins 00:09:07.591 ************************************ 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:09:07.591 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.591 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:07.591 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:07.591 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.591 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:07.591 { 00:09:07.591 "name": "Malloc1", 00:09:07.591 "aliases": [ 00:09:07.591 "2705fddc-f235-42f7-9f51-132578c15878" 00:09:07.591 ], 00:09:07.591 "product_name": "Malloc disk", 00:09:07.591 "block_size": 4096, 00:09:07.591 "num_blocks": 256, 00:09:07.591 "uuid": "2705fddc-f235-42f7-9f51-132578c15878", 00:09:07.591 "assigned_rate_limits": { 00:09:07.591 "rw_ios_per_sec": 0, 00:09:07.591 "rw_mbytes_per_sec": 0, 00:09:07.591 "r_mbytes_per_sec": 0, 00:09:07.591 "w_mbytes_per_sec": 0 00:09:07.591 }, 00:09:07.591 "claimed": false, 00:09:07.591 "zoned": false, 00:09:07.591 "supported_io_types": { 00:09:07.591 "read": true, 00:09:07.591 "write": true, 00:09:07.591 "unmap": true, 00:09:07.591 "flush": true, 00:09:07.592 "reset": true, 00:09:07.592 "nvme_admin": false, 00:09:07.592 "nvme_io": false, 00:09:07.592 "nvme_io_md": false, 00:09:07.592 "write_zeroes": true, 00:09:07.592 "zcopy": true, 00:09:07.592 "get_zone_info": false, 00:09:07.592 "zone_management": false, 00:09:07.592 "zone_append": false, 00:09:07.592 "compare": false, 00:09:07.592 "compare_and_write": false, 00:09:07.592 "abort": true, 00:09:07.592 "seek_hole": false, 00:09:07.592 "seek_data": false, 00:09:07.592 "copy": true, 00:09:07.592 "nvme_iov_md": false 00:09:07.592 }, 00:09:07.592 "memory_domains": [ 00:09:07.592 { 
00:09:07.592 "dma_device_id": "system", 00:09:07.592 "dma_device_type": 1 00:09:07.592 }, 00:09:07.592 { 00:09:07.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.592 "dma_device_type": 2 00:09:07.592 } 00:09:07.592 ], 00:09:07.592 "driver_specific": {} 00:09:07.592 } 00:09:07.592 ]' 00:09:07.592 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:07.592 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:07.592 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:07.592 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.592 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.592 11:20:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:07.592 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.592 11:20:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 11:20:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.592 11:20:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:07.592 11:20:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:07.592 ************************************ 00:09:07.592 END TEST rpc_plugins 00:09:07.592 ************************************ 00:09:07.592 11:20:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:07.592 00:09:07.592 real 0m0.161s 00:09:07.592 user 0m0.105s 00:09:07.592 sys 0m0.017s 00:09:07.592 11:20:03 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.592 11:20:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 11:20:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:07.592 11:20:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.592 11:20:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.592 11:20:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 ************************************ 00:09:07.592 START TEST rpc_trace_cmd_test 00:09:07.592 ************************************ 00:09:07.592 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:09:07.592 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:07.592 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:07.592 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.592 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:07.850 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56732", 00:09:07.850 "tpoint_group_mask": "0x8", 00:09:07.850 "iscsi_conn": { 00:09:07.850 "mask": "0x2", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "scsi": { 00:09:07.850 "mask": "0x4", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "bdev": { 00:09:07.850 "mask": "0x8", 00:09:07.850 "tpoint_mask": "0xffffffffffffffff" 00:09:07.850 }, 00:09:07.850 "nvmf_rdma": { 00:09:07.850 "mask": "0x10", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "nvmf_tcp": { 00:09:07.850 "mask": "0x20", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "ftl": { 00:09:07.850 
"mask": "0x40", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "blobfs": { 00:09:07.850 "mask": "0x80", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "dsa": { 00:09:07.850 "mask": "0x200", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "thread": { 00:09:07.850 "mask": "0x400", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "nvme_pcie": { 00:09:07.850 "mask": "0x800", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "iaa": { 00:09:07.850 "mask": "0x1000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "nvme_tcp": { 00:09:07.850 "mask": "0x2000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "bdev_nvme": { 00:09:07.850 "mask": "0x4000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "sock": { 00:09:07.850 "mask": "0x8000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "blob": { 00:09:07.850 "mask": "0x10000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "bdev_raid": { 00:09:07.850 "mask": "0x20000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 }, 00:09:07.850 "scheduler": { 00:09:07.850 "mask": "0x40000", 00:09:07.850 "tpoint_mask": "0x0" 00:09:07.850 } 00:09:07.850 }' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:07.850 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:08.109 ************************************ 00:09:08.109 END TEST rpc_trace_cmd_test 00:09:08.109 ************************************ 00:09:08.109 11:20:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:08.109 00:09:08.109 real 0m0.276s 00:09:08.109 user 0m0.237s 00:09:08.109 sys 0m0.027s 00:09:08.109 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.109 11:20:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 11:20:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:08.109 11:20:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:08.109 11:20:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:08.109 11:20:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.109 11:20:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.109 11:20:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 ************************************ 00:09:08.109 START TEST rpc_daemon_integrity 00:09:08.109 ************************************ 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 
11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:08.109 { 00:09:08.109 "name": "Malloc2", 00:09:08.109 "aliases": [ 00:09:08.109 "203fa587-fae3-4309-9dee-6f0a362ba3fc" 00:09:08.109 ], 00:09:08.109 "product_name": "Malloc disk", 00:09:08.109 "block_size": 512, 00:09:08.109 "num_blocks": 16384, 00:09:08.109 "uuid": "203fa587-fae3-4309-9dee-6f0a362ba3fc", 00:09:08.109 "assigned_rate_limits": { 00:09:08.109 "rw_ios_per_sec": 0, 00:09:08.109 "rw_mbytes_per_sec": 0, 00:09:08.109 "r_mbytes_per_sec": 0, 00:09:08.109 "w_mbytes_per_sec": 0 00:09:08.109 }, 00:09:08.109 "claimed": false, 00:09:08.109 "zoned": false, 00:09:08.109 "supported_io_types": { 00:09:08.109 "read": true, 00:09:08.109 "write": true, 00:09:08.109 "unmap": true, 00:09:08.109 "flush": true, 00:09:08.109 "reset": true, 00:09:08.109 "nvme_admin": false, 00:09:08.109 "nvme_io": false, 00:09:08.109 "nvme_io_md": false, 00:09:08.109 "write_zeroes": true, 00:09:08.109 "zcopy": true, 00:09:08.109 "get_zone_info": false, 00:09:08.109 "zone_management": false, 00:09:08.109 "zone_append": false, 00:09:08.109 "compare": false, 00:09:08.109 "compare_and_write": false, 00:09:08.109 "abort": true, 00:09:08.109 "seek_hole": false, 00:09:08.109 "seek_data": false, 00:09:08.109 "copy": true, 00:09:08.109 "nvme_iov_md": false 00:09:08.109 }, 00:09:08.109 "memory_domains": [ 00:09:08.109 { 00:09:08.109 "dma_device_id": "system", 00:09:08.109 "dma_device_type": 1 00:09:08.109 }, 00:09:08.109 { 00:09:08.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.109 "dma_device_type": 2 00:09:08.109 } 00:09:08.109 ], 00:09:08.109 "driver_specific": {} 00:09:08.109 } 00:09:08.109 ]' 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.109 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 [2024-10-07 11:20:03.565627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:08.109 [2024-10-07 11:20:03.565688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:09:08.109 [2024-10-07 11:20:03.565709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10e5a90 00:09:08.109 [2024-10-07 11:20:03.565719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.110 [2024-10-07 11:20:03.567825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.110 [2024-10-07 11:20:03.567862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:08.110 Passthru0 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:08.110 { 00:09:08.110 "name": "Malloc2", 00:09:08.110 "aliases": [ 00:09:08.110 "203fa587-fae3-4309-9dee-6f0a362ba3fc" 00:09:08.110 ], 00:09:08.110 "product_name": "Malloc disk", 00:09:08.110 "block_size": 512, 00:09:08.110 "num_blocks": 16384, 00:09:08.110 "uuid": "203fa587-fae3-4309-9dee-6f0a362ba3fc", 00:09:08.110 "assigned_rate_limits": { 00:09:08.110 "rw_ios_per_sec": 0, 00:09:08.110 "rw_mbytes_per_sec": 0, 00:09:08.110 "r_mbytes_per_sec": 0, 00:09:08.110 "w_mbytes_per_sec": 0 00:09:08.110 }, 00:09:08.110 "claimed": true, 00:09:08.110 "claim_type": "exclusive_write", 00:09:08.110 "zoned": false, 00:09:08.110 "supported_io_types": { 00:09:08.110 "read": true, 00:09:08.110 "write": true, 00:09:08.110 "unmap": true, 00:09:08.110 "flush": true, 00:09:08.110 "reset": true, 00:09:08.110 "nvme_admin": false, 00:09:08.110 "nvme_io": false, 00:09:08.110 "nvme_io_md": false, 00:09:08.110 "write_zeroes": true, 00:09:08.110 "zcopy": true, 00:09:08.110 "get_zone_info": false, 00:09:08.110 "zone_management": false, 00:09:08.110 "zone_append": false, 00:09:08.110 "compare": false, 00:09:08.110 "compare_and_write": false, 00:09:08.110 "abort": true, 00:09:08.110 "seek_hole": false, 00:09:08.110 "seek_data": false, 00:09:08.110 "copy": true, 00:09:08.110 "nvme_iov_md": false 00:09:08.110 }, 00:09:08.110 "memory_domains": [ 00:09:08.110 { 00:09:08.110 "dma_device_id": "system", 00:09:08.110 "dma_device_type": 1 00:09:08.110 }, 00:09:08.110 { 00:09:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.110 "dma_device_type": 2 00:09:08.110 } 00:09:08.110 ], 00:09:08.110 "driver_specific": {} 00:09:08.110 }, 00:09:08.110 { 00:09:08.110 "name": "Passthru0", 00:09:08.110 "aliases": [ 00:09:08.110 "bf9bb401-a429-5046-b585-650af19c2760" 00:09:08.110 ], 00:09:08.110 "product_name": "passthru", 00:09:08.110 "block_size": 512, 00:09:08.110 "num_blocks": 16384, 00:09:08.110 "uuid": "bf9bb401-a429-5046-b585-650af19c2760", 00:09:08.110 "assigned_rate_limits": { 00:09:08.110 "rw_ios_per_sec": 0, 00:09:08.110 "rw_mbytes_per_sec": 0, 00:09:08.110 "r_mbytes_per_sec": 0, 00:09:08.110 "w_mbytes_per_sec": 0 00:09:08.110 }, 00:09:08.110 "claimed": false, 00:09:08.110 "zoned": false, 00:09:08.110 "supported_io_types": { 00:09:08.110 "read": true, 00:09:08.110 "write": true, 00:09:08.110 "unmap": true, 00:09:08.110 "flush": true, 00:09:08.110 "reset": true, 00:09:08.110 "nvme_admin": false, 00:09:08.110 "nvme_io": false, 00:09:08.110 
"nvme_io_md": false, 00:09:08.110 "write_zeroes": true, 00:09:08.110 "zcopy": true, 00:09:08.110 "get_zone_info": false, 00:09:08.110 "zone_management": false, 00:09:08.110 "zone_append": false, 00:09:08.110 "compare": false, 00:09:08.110 "compare_and_write": false, 00:09:08.110 "abort": true, 00:09:08.110 "seek_hole": false, 00:09:08.110 "seek_data": false, 00:09:08.110 "copy": true, 00:09:08.110 "nvme_iov_md": false 00:09:08.110 }, 00:09:08.110 "memory_domains": [ 00:09:08.110 { 00:09:08.110 "dma_device_id": "system", 00:09:08.110 "dma_device_type": 1 00:09:08.110 }, 00:09:08.110 { 00:09:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.110 "dma_device_type": 2 00:09:08.110 } 00:09:08.110 ], 00:09:08.110 "driver_specific": { 00:09:08.110 "passthru": { 00:09:08.110 "name": "Passthru0", 00:09:08.110 "base_bdev_name": "Malloc2" 00:09:08.110 } 00:09:08.110 } 00:09:08.110 } 00:09:08.110 ]' 00:09:08.110 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:08.368 ************************************ 00:09:08.368 END TEST rpc_daemon_integrity 00:09:08.368 ************************************ 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:08.368 00:09:08.368 real 0m0.308s 00:09:08.368 user 0m0.204s 00:09:08.368 sys 0m0.042s 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.368 11:20:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:08.368 11:20:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:08.368 11:20:03 rpc -- rpc/rpc.sh@84 -- # killprocess 56732 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@950 -- # '[' -z 56732 ']' 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@954 -- # kill -0 56732 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@955 -- # uname 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56732 00:09:08.368 killing process with pid 56732 00:09:08.368 11:20:03 rpc -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56732' 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@969 -- # kill 56732 00:09:08.368 11:20:03 rpc -- common/autotest_common.sh@974 -- # wait 56732 00:09:08.935 00:09:08.935 real 0m2.946s 00:09:08.935 user 0m3.800s 00:09:08.935 sys 0m0.687s 00:09:08.935 11:20:04 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.935 ************************************ 00:09:08.935 END TEST rpc 00:09:08.935 ************************************ 00:09:08.935 11:20:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.935 11:20:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:08.935 11:20:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.935 11:20:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.935 11:20:04 -- common/autotest_common.sh@10 -- # set +x 00:09:08.935 ************************************ 00:09:08.935 START TEST skip_rpc 00:09:08.935 ************************************ 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:08.935 * Looking for test storage... 00:09:08.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.935 11:20:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.935 11:20:04 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:08.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.935 --rc genhtml_branch_coverage=1 00:09:08.935 --rc genhtml_function_coverage=1 00:09:08.935 --rc genhtml_legend=1 00:09:08.935 --rc geninfo_all_blocks=1 00:09:08.936 --rc geninfo_unexecuted_blocks=1 00:09:08.936 00:09:08.936 ' 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.936 --rc genhtml_branch_coverage=1 00:09:08.936 --rc genhtml_function_coverage=1 00:09:08.936 --rc genhtml_legend=1 00:09:08.936 --rc geninfo_all_blocks=1 00:09:08.936 --rc geninfo_unexecuted_blocks=1 00:09:08.936 00:09:08.936 ' 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.936 --rc genhtml_branch_coverage=1 00:09:08.936 --rc genhtml_function_coverage=1 00:09:08.936 --rc genhtml_legend=1 00:09:08.936 --rc geninfo_all_blocks=1 00:09:08.936 --rc geninfo_unexecuted_blocks=1 00:09:08.936 00:09:08.936 ' 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:08.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.936 --rc genhtml_branch_coverage=1 00:09:08.936 --rc genhtml_function_coverage=1 00:09:08.936 --rc genhtml_legend=1 00:09:08.936 --rc geninfo_all_blocks=1 00:09:08.936 --rc geninfo_unexecuted_blocks=1 00:09:08.936 00:09:08.936 ' 00:09:08.936 11:20:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:08.936 11:20:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:08.936 11:20:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.936 11:20:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.194 ************************************ 00:09:09.194 START TEST skip_rpc 00:09:09.194 ************************************ 00:09:09.194 11:20:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:09.194 11:20:04 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56938 00:09:09.194 11:20:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.194 11:20:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:09.194 11:20:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:09.194 [2024-10-07 11:20:04.533735] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:09.194 [2024-10-07 11:20:04.533853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56938 ] 00:09:09.194 [2024-10-07 11:20:04.672831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.452 [2024-10-07 11:20:04.804845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.452 [2024-10-07 11:20:04.884150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56938 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56938 ']' 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56938 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56938 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 56938' 00:09:14.719 killing process with pid 56938 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56938 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56938 00:09:14.719 ************************************ 00:09:14.719 END TEST skip_rpc 00:09:14.719 ************************************ 00:09:14.719 00:09:14.719 real 0m5.455s 00:09:14.719 user 0m5.066s 00:09:14.719 sys 0m0.292s 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.719 11:20:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.719 11:20:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:14.719 11:20:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.719 11:20:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.719 11:20:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.719 ************************************ 00:09:14.719 START TEST skip_rpc_with_json 00:09:14.719 ************************************ 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57023 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57023 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57023 ']' 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.719 11:20:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:14.719 [2024-10-07 11:20:10.055344] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
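END TEST skip_rpc above exercises the negative path: with --no-rpc-server the target never opens the RPC socket, so the NOT wrapper expects rpc_cmd spdk_get_version to fail and treats that failure as a pass. A minimal manual check along the same lines, using the binary and flags from the log and expecting the error branch:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5   # mirrors the test's fixed wait before probing
  if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server should not be listening"
  else
    echo "expected failure: --no-rpc-server leaves /var/tmp/spdk.sock unserved"
  fi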
00:09:14.719 [2024-10-07 11:20:10.057726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57023 ] 00:09:14.719 [2024-10-07 11:20:10.199435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.977 [2024-10-07 11:20:10.317567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.977 [2024-10-07 11:20:10.392052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.562 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.562 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:09:15.562 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:15.562 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.562 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.562 [2024-10-07 11:20:11.084147] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:15.821 request: 00:09:15.821 { 00:09:15.821 "trtype": "tcp", 00:09:15.821 "method": "nvmf_get_transports", 00:09:15.821 "req_id": 1 00:09:15.821 } 00:09:15.821 Got JSON-RPC error response 00:09:15.821 response: 00:09:15.821 { 00:09:15.821 "code": -19, 00:09:15.821 "message": "No such device" 00:09:15.821 } 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.821 [2024-10-07 11:20:11.096254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.821 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:15.821 { 00:09:15.821 "subsystems": [ 00:09:15.821 { 00:09:15.821 "subsystem": "fsdev", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "fsdev_set_opts", 00:09:15.821 "params": { 00:09:15.821 "fsdev_io_pool_size": 65535, 00:09:15.821 "fsdev_io_cache_size": 256 00:09:15.821 } 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "keyring", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "iobuf", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "iobuf_set_options", 00:09:15.821 "params": { 00:09:15.821 "small_pool_count": 8192, 00:09:15.821 "large_pool_count": 1024, 00:09:15.821 "small_bufsize": 8192, 00:09:15.821 "large_bufsize": 135168 00:09:15.821 } 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 
}, 00:09:15.821 { 00:09:15.821 "subsystem": "sock", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "sock_set_default_impl", 00:09:15.821 "params": { 00:09:15.821 "impl_name": "uring" 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "sock_impl_set_options", 00:09:15.821 "params": { 00:09:15.821 "impl_name": "ssl", 00:09:15.821 "recv_buf_size": 4096, 00:09:15.821 "send_buf_size": 4096, 00:09:15.821 "enable_recv_pipe": true, 00:09:15.821 "enable_quickack": false, 00:09:15.821 "enable_placement_id": 0, 00:09:15.821 "enable_zerocopy_send_server": true, 00:09:15.821 "enable_zerocopy_send_client": false, 00:09:15.821 "zerocopy_threshold": 0, 00:09:15.821 "tls_version": 0, 00:09:15.821 "enable_ktls": false 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "sock_impl_set_options", 00:09:15.821 "params": { 00:09:15.821 "impl_name": "posix", 00:09:15.821 "recv_buf_size": 2097152, 00:09:15.821 "send_buf_size": 2097152, 00:09:15.821 "enable_recv_pipe": true, 00:09:15.821 "enable_quickack": false, 00:09:15.821 "enable_placement_id": 0, 00:09:15.821 "enable_zerocopy_send_server": true, 00:09:15.821 "enable_zerocopy_send_client": false, 00:09:15.821 "zerocopy_threshold": 0, 00:09:15.821 "tls_version": 0, 00:09:15.821 "enable_ktls": false 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "sock_impl_set_options", 00:09:15.821 "params": { 00:09:15.821 "impl_name": "uring", 00:09:15.821 "recv_buf_size": 2097152, 00:09:15.821 "send_buf_size": 2097152, 00:09:15.821 "enable_recv_pipe": true, 00:09:15.821 "enable_quickack": false, 00:09:15.821 "enable_placement_id": 0, 00:09:15.821 "enable_zerocopy_send_server": false, 00:09:15.821 "enable_zerocopy_send_client": false, 00:09:15.821 "zerocopy_threshold": 0, 00:09:15.821 "tls_version": 0, 00:09:15.821 "enable_ktls": false 00:09:15.821 } 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "vmd", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "accel", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "accel_set_options", 00:09:15.821 "params": { 00:09:15.821 "small_cache_size": 128, 00:09:15.821 "large_cache_size": 16, 00:09:15.821 "task_count": 2048, 00:09:15.821 "sequence_count": 2048, 00:09:15.821 "buf_count": 2048 00:09:15.821 } 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "bdev", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "bdev_set_options", 00:09:15.821 "params": { 00:09:15.821 "bdev_io_pool_size": 65535, 00:09:15.821 "bdev_io_cache_size": 256, 00:09:15.821 "bdev_auto_examine": true, 00:09:15.821 "iobuf_small_cache_size": 128, 00:09:15.821 "iobuf_large_cache_size": 16 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "bdev_raid_set_options", 00:09:15.821 "params": { 00:09:15.821 "process_window_size_kb": 1024, 00:09:15.821 "process_max_bandwidth_mb_sec": 0 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "bdev_iscsi_set_options", 00:09:15.821 "params": { 00:09:15.821 "timeout_sec": 30 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "bdev_nvme_set_options", 00:09:15.821 "params": { 00:09:15.821 "action_on_timeout": "none", 00:09:15.821 "timeout_us": 0, 00:09:15.821 "timeout_admin_us": 0, 00:09:15.821 "keep_alive_timeout_ms": 10000, 00:09:15.821 "arbitration_burst": 0, 00:09:15.821 "low_priority_weight": 0, 00:09:15.821 "medium_priority_weight": 0, 00:09:15.821 "high_priority_weight": 0, 
00:09:15.821 "nvme_adminq_poll_period_us": 10000, 00:09:15.821 "nvme_ioq_poll_period_us": 0, 00:09:15.821 "io_queue_requests": 0, 00:09:15.821 "delay_cmd_submit": true, 00:09:15.821 "transport_retry_count": 4, 00:09:15.821 "bdev_retry_count": 3, 00:09:15.821 "transport_ack_timeout": 0, 00:09:15.821 "ctrlr_loss_timeout_sec": 0, 00:09:15.821 "reconnect_delay_sec": 0, 00:09:15.821 "fast_io_fail_timeout_sec": 0, 00:09:15.821 "disable_auto_failback": false, 00:09:15.821 "generate_uuids": false, 00:09:15.821 "transport_tos": 0, 00:09:15.821 "nvme_error_stat": false, 00:09:15.821 "rdma_srq_size": 0, 00:09:15.821 "io_path_stat": false, 00:09:15.821 "allow_accel_sequence": false, 00:09:15.821 "rdma_max_cq_size": 0, 00:09:15.821 "rdma_cm_event_timeout_ms": 0, 00:09:15.821 "dhchap_digests": [ 00:09:15.821 "sha256", 00:09:15.821 "sha384", 00:09:15.821 "sha512" 00:09:15.821 ], 00:09:15.821 "dhchap_dhgroups": [ 00:09:15.821 "null", 00:09:15.821 "ffdhe2048", 00:09:15.821 "ffdhe3072", 00:09:15.821 "ffdhe4096", 00:09:15.821 "ffdhe6144", 00:09:15.821 "ffdhe8192" 00:09:15.821 ] 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "bdev_nvme_set_hotplug", 00:09:15.821 "params": { 00:09:15.821 "period_us": 100000, 00:09:15.821 "enable": false 00:09:15.821 } 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "method": "bdev_wait_for_examine" 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "scsi", 00:09:15.821 "config": null 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "scheduler", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "framework_set_scheduler", 00:09:15.821 "params": { 00:09:15.821 "name": "static" 00:09:15.821 } 00:09:15.821 } 00:09:15.821 ] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "vhost_scsi", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "vhost_blk", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "ublk", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "nbd", 00:09:15.821 "config": [] 00:09:15.821 }, 00:09:15.821 { 00:09:15.821 "subsystem": "nvmf", 00:09:15.821 "config": [ 00:09:15.821 { 00:09:15.821 "method": "nvmf_set_config", 00:09:15.821 "params": { 00:09:15.822 "discovery_filter": "match_any", 00:09:15.822 "admin_cmd_passthru": { 00:09:15.822 "identify_ctrlr": false 00:09:15.822 }, 00:09:15.822 "dhchap_digests": [ 00:09:15.822 "sha256", 00:09:15.822 "sha384", 00:09:15.822 "sha512" 00:09:15.822 ], 00:09:15.822 "dhchap_dhgroups": [ 00:09:15.822 "null", 00:09:15.822 "ffdhe2048", 00:09:15.822 "ffdhe3072", 00:09:15.822 "ffdhe4096", 00:09:15.822 "ffdhe6144", 00:09:15.822 "ffdhe8192" 00:09:15.822 ] 00:09:15.822 } 00:09:15.822 }, 00:09:15.822 { 00:09:15.822 "method": "nvmf_set_max_subsystems", 00:09:15.822 "params": { 00:09:15.822 "max_subsystems": 1024 00:09:15.822 } 00:09:15.822 }, 00:09:15.822 { 00:09:15.822 "method": "nvmf_set_crdt", 00:09:15.822 "params": { 00:09:15.822 "crdt1": 0, 00:09:15.822 "crdt2": 0, 00:09:15.822 "crdt3": 0 00:09:15.822 } 00:09:15.822 }, 00:09:15.822 { 00:09:15.822 "method": "nvmf_create_transport", 00:09:15.822 "params": { 00:09:15.822 "trtype": "TCP", 00:09:15.822 "max_queue_depth": 128, 00:09:15.822 "max_io_qpairs_per_ctrlr": 127, 00:09:15.822 "in_capsule_data_size": 4096, 00:09:15.822 "max_io_size": 131072, 00:09:15.822 "io_unit_size": 131072, 00:09:15.822 "max_aq_depth": 128, 00:09:15.822 "num_shared_buffers": 511, 00:09:15.822 "buf_cache_size": 4294967295, 00:09:15.822 
"dif_insert_or_strip": false, 00:09:15.822 "zcopy": false, 00:09:15.822 "c2h_success": true, 00:09:15.822 "sock_priority": 0, 00:09:15.822 "abort_timeout_sec": 1, 00:09:15.822 "ack_timeout": 0, 00:09:15.822 "data_wr_pool_size": 0 00:09:15.822 } 00:09:15.822 } 00:09:15.822 ] 00:09:15.822 }, 00:09:15.822 { 00:09:15.822 "subsystem": "iscsi", 00:09:15.822 "config": [ 00:09:15.822 { 00:09:15.822 "method": "iscsi_set_options", 00:09:15.822 "params": { 00:09:15.822 "node_base": "iqn.2016-06.io.spdk", 00:09:15.822 "max_sessions": 128, 00:09:15.822 "max_connections_per_session": 2, 00:09:15.822 "max_queue_depth": 64, 00:09:15.822 "default_time2wait": 2, 00:09:15.822 "default_time2retain": 20, 00:09:15.822 "first_burst_length": 8192, 00:09:15.822 "immediate_data": true, 00:09:15.822 "allow_duplicated_isid": false, 00:09:15.822 "error_recovery_level": 0, 00:09:15.822 "nop_timeout": 60, 00:09:15.822 "nop_in_interval": 30, 00:09:15.822 "disable_chap": false, 00:09:15.822 "require_chap": false, 00:09:15.822 "mutual_chap": false, 00:09:15.822 "chap_group": 0, 00:09:15.822 "max_large_datain_per_connection": 64, 00:09:15.822 "max_r2t_per_connection": 4, 00:09:15.822 "pdu_pool_size": 36864, 00:09:15.822 "immediate_data_pool_size": 16384, 00:09:15.822 "data_out_pool_size": 2048 00:09:15.822 } 00:09:15.822 } 00:09:15.822 ] 00:09:15.822 } 00:09:15.822 ] 00:09:15.822 } 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57023 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57023 ']' 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57023 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57023 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.822 killing process with pid 57023 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57023' 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57023 00:09:15.822 11:20:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57023 00:09:16.389 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57052 00:09:16.389 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:16.389 11:20:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:21.715 11:20:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57052 00:09:21.715 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57052 ']' 00:09:21.715 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57052 00:09:21.715 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57052 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.716 killing process with pid 57052 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57052' 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57052 00:09:21.716 11:20:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57052 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:21.716 00:09:21.716 real 0m7.210s 00:09:21.716 user 0m6.987s 00:09:21.716 sys 0m0.682s 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:21.716 ************************************ 00:09:21.716 END TEST skip_rpc_with_json 00:09:21.716 ************************************ 00:09:21.716 11:20:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:21.716 11:20:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.716 11:20:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.716 11:20:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.716 ************************************ 00:09:21.716 START TEST skip_rpc_with_delay 00:09:21.716 ************************************ 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:21.716 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:21.974 [2024-10-07 11:20:17.297891] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:21.974 [2024-10-07 11:20:17.298038] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.974 00:09:21.974 real 0m0.090s 00:09:21.974 user 0m0.053s 00:09:21.974 sys 0m0.036s 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.974 11:20:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:21.974 ************************************ 00:09:21.974 END TEST skip_rpc_with_delay 00:09:21.974 ************************************ 00:09:21.974 11:20:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:21.974 11:20:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:21.974 11:20:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:21.974 11:20:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.974 11:20:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.974 11:20:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.974 ************************************ 00:09:21.974 START TEST exit_on_failed_rpc_init 00:09:21.974 ************************************ 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57161 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57161 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57161 ']' 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.974 11:20:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:21.974 [2024-10-07 11:20:17.440349] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
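For context, the skip_rpc_with_delay failure recorded above is the expected outcome: --wait-for-rpc asks spdk_app_start to pause until an RPC releases initialization, which cannot be honored once --no-rpc-server has removed the RPC listener. A minimal sketch of the contrast, using the same binary as this workspace; the framework_start_init release step is the usual way to resume a target started with --wait-for-rpc and is an assumption here, not something this run executed:

    # rejected combination - no RPC server, yet told to wait for an RPC
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc

    # working pattern (illustrative): keep the RPC server and release init explicitly
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init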
00:09:21.974 [2024-10-07 11:20:17.440454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57161 ] 00:09:22.233 [2024-10-07 11:20:17.581888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.233 [2024-10-07 11:20:17.710657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.491 [2024-10-07 11:20:17.791258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:23.083 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:23.083 [2024-10-07 11:20:18.487763] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:23.083 [2024-10-07 11:20:18.487866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57179 ] 00:09:23.341 [2024-10-07 11:20:18.625972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.341 [2024-10-07 11:20:18.748271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.341 [2024-10-07 11:20:18.748358] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
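The rpc.c error that closes this block is the point of the exit_on_failed_rpc_init test: both targets default to the same RPC Unix socket, /var/tmp/spdk.sock, so the second instance (-m 0x2) cannot bind its listener and must abort. Outside a negative test, each instance would get its own socket with -r, the same flag the json_config run below uses for /var/tmp/spdk_tgt.sock. A hedged sketch, not part of the recorded run:

    # two concurrent targets, each with a private RPC socket (illustrative paths)
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version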
00:09:23.341 [2024-10-07 11:20:18.748374] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:23.341 [2024-10-07 11:20:18.748383] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57161 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57161 ']' 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57161 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.342 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57161 00:09:23.601 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.601 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.601 killing process with pid 57161 00:09:23.601 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57161' 00:09:23.601 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57161 00:09:23.601 11:20:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57161 00:09:23.860 00:09:23.860 real 0m1.922s 00:09:23.860 user 0m2.256s 00:09:23.860 sys 0m0.445s 00:09:23.860 11:20:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.860 11:20:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:23.860 ************************************ 00:09:23.860 END TEST exit_on_failed_rpc_init 00:09:23.860 ************************************ 00:09:23.860 11:20:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:23.860 00:09:23.860 real 0m15.078s 00:09:23.860 user 0m14.550s 00:09:23.860 sys 0m1.658s 00:09:23.860 11:20:19 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.860 11:20:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.860 ************************************ 00:09:23.861 END TEST skip_rpc 00:09:23.861 ************************************ 00:09:23.861 11:20:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:23.861 11:20:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.861 11:20:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.861 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:09:24.119 
************************************ 00:09:24.119 START TEST rpc_client 00:09:24.119 ************************************ 00:09:24.119 11:20:19 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:24.119 * Looking for test storage... 00:09:24.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:24.119 11:20:19 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.119 11:20:19 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.119 11:20:19 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.119 11:20:19 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.119 11:20:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.120 11:20:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.120 --rc genhtml_branch_coverage=1 00:09:24.120 --rc genhtml_function_coverage=1 00:09:24.120 --rc genhtml_legend=1 00:09:24.120 --rc geninfo_all_blocks=1 00:09:24.120 --rc geninfo_unexecuted_blocks=1 00:09:24.120 00:09:24.120 ' 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.120 --rc genhtml_branch_coverage=1 00:09:24.120 --rc genhtml_function_coverage=1 00:09:24.120 --rc genhtml_legend=1 00:09:24.120 --rc geninfo_all_blocks=1 00:09:24.120 --rc geninfo_unexecuted_blocks=1 00:09:24.120 00:09:24.120 ' 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.120 --rc genhtml_branch_coverage=1 00:09:24.120 --rc genhtml_function_coverage=1 00:09:24.120 --rc genhtml_legend=1 00:09:24.120 --rc geninfo_all_blocks=1 00:09:24.120 --rc geninfo_unexecuted_blocks=1 00:09:24.120 00:09:24.120 ' 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.120 --rc genhtml_branch_coverage=1 00:09:24.120 --rc genhtml_function_coverage=1 00:09:24.120 --rc genhtml_legend=1 00:09:24.120 --rc geninfo_all_blocks=1 00:09:24.120 --rc geninfo_unexecuted_blocks=1 00:09:24.120 00:09:24.120 ' 00:09:24.120 11:20:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:24.120 OK 00:09:24.120 11:20:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:24.120 00:09:24.120 real 0m0.193s 00:09:24.120 user 0m0.128s 00:09:24.120 sys 0m0.078s 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.120 11:20:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:24.120 ************************************ 00:09:24.120 END TEST rpc_client 00:09:24.120 ************************************ 00:09:24.120 11:20:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:24.120 11:20:19 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.120 11:20:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.120 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:09:24.120 ************************************ 00:09:24.120 START TEST json_config 00:09:24.120 ************************************ 00:09:24.120 11:20:19 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.379 11:20:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.379 11:20:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.379 11:20:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.379 11:20:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.379 11:20:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.379 11:20:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:24.379 11:20:19 json_config -- scripts/common.sh@345 -- # : 1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.379 11:20:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.379 11:20:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@353 -- # local d=1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.379 11:20:19 json_config -- scripts/common.sh@355 -- # echo 1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.379 11:20:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@353 -- # local d=2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.379 11:20:19 json_config -- scripts/common.sh@355 -- # echo 2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.379 11:20:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.379 11:20:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.379 11:20:19 json_config -- scripts/common.sh@368 -- # return 0 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.379 --rc genhtml_branch_coverage=1 00:09:24.379 --rc genhtml_function_coverage=1 00:09:24.379 --rc genhtml_legend=1 00:09:24.379 --rc geninfo_all_blocks=1 00:09:24.379 --rc geninfo_unexecuted_blocks=1 00:09:24.379 00:09:24.379 ' 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.379 --rc genhtml_branch_coverage=1 00:09:24.379 --rc genhtml_function_coverage=1 00:09:24.379 --rc genhtml_legend=1 00:09:24.379 --rc geninfo_all_blocks=1 00:09:24.379 --rc geninfo_unexecuted_blocks=1 00:09:24.379 00:09:24.379 ' 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.379 --rc genhtml_branch_coverage=1 00:09:24.379 --rc genhtml_function_coverage=1 00:09:24.379 --rc genhtml_legend=1 00:09:24.379 --rc geninfo_all_blocks=1 00:09:24.379 --rc geninfo_unexecuted_blocks=1 00:09:24.379 00:09:24.379 ' 00:09:24.379 11:20:19 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.379 --rc genhtml_branch_coverage=1 00:09:24.379 --rc genhtml_function_coverage=1 00:09:24.379 --rc genhtml_legend=1 00:09:24.379 --rc geninfo_all_blocks=1 00:09:24.379 --rc geninfo_unexecuted_blocks=1 00:09:24.379 00:09:24.379 ' 00:09:24.379 11:20:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.379 11:20:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:24.379 11:20:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.379 11:20:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.379 11:20:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.379 11:20:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.380 11:20:19 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.380 11:20:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.380 11:20:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.380 11:20:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.380 11:20:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.380 11:20:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.380 11:20:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.380 11:20:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.380 11:20:19 json_config -- paths/export.sh@5 -- # export PATH 00:09:24.380 11:20:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@51 -- # : 0 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.380 11:20:19 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.380 11:20:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:24.380 INFO: JSON configuration test init 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 11:20:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:24.380 11:20:19 json_config -- json_config/common.sh@9 -- # local app=target 00:09:24.380 11:20:19 json_config -- json_config/common.sh@10 -- # shift 
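For reference, the spdk_tgt_config.json that this harness saves and reloads has the same shape as the configuration dump earlier in this log: each subsystem entry carries a "subsystem" name and a "config" list of {"method", "params"} pairs, and the file wraps those entries in a top-level "subsystems" array. A trimmed sketch with values taken from the dump above (abbreviated, not the full file):

    {
      "subsystems": [
        {
          "subsystem": "sock",
          "config": [
            { "method": "sock_set_default_impl", "params": { "impl_name": "uring" } }
          ]
        },
        {
          "subsystem": "nvmf",
          "config": [
            { "method": "nvmf_create_transport", "params": { "trtype": "TCP", "max_queue_depth": 128 } }
          ]
        }
      ]
    }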
00:09:24.380 11:20:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:24.380 11:20:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:24.380 11:20:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:24.380 11:20:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:24.380 11:20:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:24.380 11:20:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57319 00:09:24.380 Waiting for target to run... 00:09:24.380 11:20:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:24.380 11:20:19 json_config -- json_config/common.sh@25 -- # waitforlisten 57319 /var/tmp/spdk_tgt.sock 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 57319 ']' 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.380 11:20:19 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:24.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.380 11:20:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 [2024-10-07 11:20:19.879010] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:24.380 [2024-10-07 11:20:19.879106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57319 ] 00:09:24.946 [2024-10-07 11:20:20.288709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.946 [2024-10-07 11:20:20.390700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:25.513 11:20:20 json_config -- json_config/common.sh@26 -- # echo '' 00:09:25.513 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.513 11:20:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:25.513 11:20:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:25.513 11:20:20 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:25.771 [2024-10-07 11:20:21.278202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:26.029 11:20:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.029 11:20:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:26.029 11:20:21 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:26.029 11:20:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@54 -- # sort 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:26.288 11:20:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.288 11:20:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:26.288 11:20:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.288 11:20:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.288 11:20:21 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:26.288 11:20:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:26.288 11:20:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:26.546 MallocForNvmf0 00:09:26.804 11:20:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:26.804 11:20:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:26.804 MallocForNvmf1 00:09:27.062 11:20:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:27.062 11:20:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:27.062 [2024-10-07 11:20:22.574995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.321 11:20:22 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.321 11:20:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.580 11:20:22 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:27.580 11:20:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:27.839 11:20:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:27.839 11:20:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:28.097 11:20:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:28.097 11:20:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:28.356 [2024-10-07 11:20:23.715708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:28.356 11:20:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:28.356 11:20:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.356 11:20:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 11:20:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:28.356 11:20:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.356 11:20:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 11:20:23 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:09:28.356 11:20:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:28.356 11:20:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:28.614 MallocBdevForConfigChangeCheck 00:09:28.614 11:20:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:28.614 11:20:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.614 11:20:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.614 11:20:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:28.614 11:20:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:29.180 INFO: shutting down applications... 00:09:29.180 11:20:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:29.180 11:20:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:29.180 11:20:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:29.180 11:20:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:29.180 11:20:24 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:29.438 Calling clear_iscsi_subsystem 00:09:29.438 Calling clear_nvmf_subsystem 00:09:29.438 Calling clear_nbd_subsystem 00:09:29.438 Calling clear_ublk_subsystem 00:09:29.438 Calling clear_vhost_blk_subsystem 00:09:29.438 Calling clear_vhost_scsi_subsystem 00:09:29.438 Calling clear_bdev_subsystem 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:29.438 11:20:24 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:30.004 11:20:25 json_config -- json_config/json_config.sh@352 -- # break 00:09:30.004 11:20:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:30.004 11:20:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:30.004 11:20:25 json_config -- json_config/common.sh@31 -- # local app=target 00:09:30.004 11:20:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:30.004 11:20:25 json_config -- json_config/common.sh@35 -- # [[ -n 57319 ]] 00:09:30.004 11:20:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57319 00:09:30.004 11:20:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:30.004 11:20:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:30.004 11:20:25 json_config -- json_config/common.sh@41 -- # kill -0 57319 00:09:30.004 11:20:25 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:09:30.293 11:20:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:30.293 11:20:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:30.293 11:20:25 json_config -- json_config/common.sh@41 -- # kill -0 57319 00:09:30.293 11:20:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:30.293 11:20:25 json_config -- json_config/common.sh@43 -- # break 00:09:30.293 11:20:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:30.293 SPDK target shutdown done 00:09:30.293 11:20:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:30.293 INFO: relaunching applications... 00:09:30.293 11:20:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:30.293 11:20:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:30.293 11:20:25 json_config -- json_config/common.sh@9 -- # local app=target 00:09:30.293 11:20:25 json_config -- json_config/common.sh@10 -- # shift 00:09:30.293 11:20:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:30.293 11:20:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:30.293 11:20:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:30.293 11:20:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:30.293 11:20:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:30.293 11:20:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57514 00:09:30.293 Waiting for target to run... 00:09:30.293 11:20:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:30.293 11:20:25 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:30.293 11:20:25 json_config -- json_config/common.sh@25 -- # waitforlisten 57514 /var/tmp/spdk_tgt.sock 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@831 -- # '[' -z 57514 ']' 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.293 11:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:30.552 [2024-10-07 11:20:25.854747] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
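The relaunch above is the round-trip this test exists to prove: the configuration captured from the first target with save_config is handed back verbatim through --json. The generic pattern, sketched outside the harness with the socket and file names used in this workspace:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json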
00:09:30.552 [2024-10-07 11:20:25.855412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57514 ] 00:09:30.811 [2024-10-07 11:20:26.277292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.070 [2024-10-07 11:20:26.381485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.070 [2024-10-07 11:20:26.520741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.328 [2024-10-07 11:20:26.739990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.328 [2024-10-07 11:20:26.772083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:31.328 11:20:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.328 11:20:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:31.328 00:09:31.328 11:20:26 json_config -- json_config/common.sh@26 -- # echo '' 00:09:31.328 11:20:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:31.328 INFO: Checking if target configuration is the same... 00:09:31.328 11:20:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:31.328 11:20:26 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:31.328 11:20:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:31.328 11:20:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:31.328 + '[' 2 -ne 2 ']' 00:09:31.328 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:31.328 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:31.328 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.586 +++ basename /dev/fd/62 00:09:31.586 ++ mktemp /tmp/62.XXX 00:09:31.586 + tmp_file_1=/tmp/62.tL6 00:09:31.586 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:31.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:31.586 + tmp_file_2=/tmp/spdk_tgt_config.json.cTc 00:09:31.586 + ret=0 00:09:31.586 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:31.845 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:31.845 + diff -u /tmp/62.tL6 /tmp/spdk_tgt_config.json.cTc 00:09:31.845 + echo 'INFO: JSON config files are the same' 00:09:31.845 INFO: JSON config files are the same 00:09:31.845 + rm /tmp/62.tL6 /tmp/spdk_tgt_config.json.cTc 00:09:31.845 + exit 0 00:09:31.845 11:20:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:31.845 INFO: changing configuration and checking if this can be detected... 00:09:31.845 11:20:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
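The "JSON config files are the same" verdict comes from normalizing both documents before diffing, since save_config does not guarantee ordering. A hand-run sketch of the same normalization the json_diff.sh steps above perform:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.json
    diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'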
00:09:31.845 11:20:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:31.845 11:20:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:32.413 11:20:27 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:32.413 11:20:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:32.413 11:20:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:32.413 + '[' 2 -ne 2 ']' 00:09:32.413 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:32.413 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:32.413 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.413 +++ basename /dev/fd/62 00:09:32.413 ++ mktemp /tmp/62.XXX 00:09:32.413 + tmp_file_1=/tmp/62.u3D 00:09:32.413 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:32.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:32.413 + tmp_file_2=/tmp/spdk_tgt_config.json.iI5 00:09:32.413 + ret=0 00:09:32.413 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:32.672 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:32.672 + diff -u /tmp/62.u3D /tmp/spdk_tgt_config.json.iI5 00:09:32.672 + ret=1 00:09:32.672 + echo '=== Start of file: /tmp/62.u3D ===' 00:09:32.672 + cat /tmp/62.u3D 00:09:32.931 + echo '=== End of file: /tmp/62.u3D ===' 00:09:32.931 + echo '' 00:09:32.931 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iI5 ===' 00:09:32.931 + cat /tmp/spdk_tgt_config.json.iI5 00:09:32.931 + echo '=== End of file: /tmp/spdk_tgt_config.json.iI5 ===' 00:09:32.931 + echo '' 00:09:32.931 + rm /tmp/62.u3D /tmp/spdk_tgt_config.json.iI5 00:09:32.931 + exit 1 00:09:32.931 INFO: configuration change detected. 00:09:32.931 11:20:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
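The detection leg works the same way in reverse: deleting the sentinel malloc bdev over RPC changes the live configuration, so the next sorted diff must come back non-zero, which the test treats as success. Sketched outside the harness:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the sorted diff from the previous step now exits 1 ("configuration change detected")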
00:09:32.931 11:20:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:32.931 11:20:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:32.931 11:20:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 57514 ]] 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:32.932 11:20:28 json_config -- json_config/json_config.sh@330 -- # killprocess 57514 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 57514 ']' 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@954 -- # kill -0 57514 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@955 -- # uname 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57514 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.932 killing process with pid 57514 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57514' 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@969 -- # kill 57514 00:09:32.932 11:20:28 json_config -- common/autotest_common.sh@974 -- # wait 57514 00:09:33.190 11:20:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:33.190 11:20:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:33.190 11:20:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.190 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.190 11:20:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:33.190 INFO: Success 00:09:33.190 11:20:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:33.190 00:09:33.190 real 0m8.991s 00:09:33.190 user 0m13.055s 00:09:33.190 sys 0m1.811s 00:09:33.191 
11:20:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.191 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.191 ************************************ 00:09:33.191 END TEST json_config 00:09:33.191 ************************************ 00:09:33.191 11:20:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:33.191 11:20:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.191 11:20:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.191 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:09:33.191 ************************************ 00:09:33.191 START TEST json_config_extra_key 00:09:33.191 ************************************ 00:09:33.191 11:20:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:33.449 11:20:28 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:33.449 11:20:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:09:33.449 11:20:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:33.449 11:20:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:33.450 11:20:28 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.450 11:20:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.450 --rc genhtml_branch_coverage=1 00:09:33.450 --rc genhtml_function_coverage=1 00:09:33.450 --rc genhtml_legend=1 00:09:33.450 --rc geninfo_all_blocks=1 00:09:33.450 --rc geninfo_unexecuted_blocks=1 00:09:33.450 00:09:33.450 ' 00:09:33.450 11:20:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.450 --rc genhtml_branch_coverage=1 00:09:33.450 --rc genhtml_function_coverage=1 00:09:33.450 --rc genhtml_legend=1 00:09:33.450 --rc geninfo_all_blocks=1 00:09:33.450 --rc geninfo_unexecuted_blocks=1 00:09:33.450 00:09:33.450 ' 00:09:33.450 11:20:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.450 --rc genhtml_branch_coverage=1 00:09:33.450 --rc genhtml_function_coverage=1 00:09:33.450 --rc genhtml_legend=1 00:09:33.450 --rc geninfo_all_blocks=1 00:09:33.450 --rc geninfo_unexecuted_blocks=1 00:09:33.450 00:09:33.450 ' 00:09:33.450 11:20:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:33.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.450 --rc genhtml_branch_coverage=1 00:09:33.450 --rc genhtml_function_coverage=1 00:09:33.450 --rc genhtml_legend=1 00:09:33.450 --rc geninfo_all_blocks=1 00:09:33.450 --rc geninfo_unexecuted_blocks=1 00:09:33.450 00:09:33.450 ' 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.450 11:20:28 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.450 11:20:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.450 11:20:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.450 11:20:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.450 11:20:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.450 11:20:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:33.450 11:20:28 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.450 11:20:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:33.450 INFO: launching applications... 00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
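The harness state set up just above keys everything off the app name: app_socket maps 'target' to its RPC socket, app_params to the spdk_tgt core/memory options, and configs_path to the extra_key.json it must load; json_config_test_start_app then assembles those into one spdk_tgt command line, traced next. A simplified sketch of that bookkeeping — start_app here is an illustrative stand-in for the real helper, without its waitforlisten and error handling:

  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
  declare -A app_pid=()

  start_app() {    # illustrative stand-in for json_config_test_start_app
      local app=$1
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!    # remembered so the shutdown path can signal it later
  }

  start_app target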
00:09:33.450 11:20:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:33.450 11:20:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:33.451 11:20:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:33.451 11:20:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57668 00:09:33.451 Waiting for target to run... 00:09:33.451 11:20:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:33.451 11:20:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57668 /var/tmp/spdk_tgt.sock 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57668 ']' 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.451 11:20:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:33.451 11:20:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:33.451 [2024-10-07 11:20:28.920821] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:33.451 [2024-10-07 11:20:28.921848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:09:34.018 [2024-10-07 11:20:29.355105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.018 [2024-10-07 11:20:29.476445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.018 [2024-10-07 11:20:29.508335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.585 11:20:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.585 00:09:34.585 11:20:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:34.585 INFO: shutting down applications... 00:09:34.585 11:20:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
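The shutdown announced here, and traced in the next lines, is a polling loop: send SIGINT to the target, then check kill -0 every half second for up to 30 iterations until the process is gone. A compact sketch of that pattern, assuming $pid holds the spdk_tgt PID recorded at start-up:

  pid=${app_pid[target]}
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$pid" 2>/dev/null; then    # process has exited cleanly
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5    # still running, give it another half second
  done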
00:09:34.585 11:20:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57668 ]] 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57668 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:09:34.585 11:20:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:35.182 SPDK target shutdown done 00:09:35.182 11:20:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:35.182 Success 00:09:35.182 11:20:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:35.182 00:09:35.182 real 0m1.833s 00:09:35.182 user 0m1.807s 00:09:35.182 sys 0m0.478s 00:09:35.182 11:20:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.182 11:20:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:35.182 ************************************ 00:09:35.182 END TEST json_config_extra_key 00:09:35.182 ************************************ 00:09:35.182 11:20:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:35.182 11:20:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.182 11:20:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.182 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:09:35.182 ************************************ 00:09:35.182 START TEST alias_rpc 00:09:35.182 ************************************ 00:09:35.182 11:20:30 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:35.182 * Looking for test storage... 
00:09:35.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:35.182 11:20:30 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.182 11:20:30 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.182 11:20:30 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.441 11:20:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.441 --rc genhtml_branch_coverage=1 00:09:35.441 --rc genhtml_function_coverage=1 00:09:35.441 --rc genhtml_legend=1 00:09:35.441 --rc geninfo_all_blocks=1 00:09:35.441 --rc geninfo_unexecuted_blocks=1 00:09:35.441 00:09:35.441 ' 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.441 --rc genhtml_branch_coverage=1 00:09:35.441 --rc genhtml_function_coverage=1 00:09:35.441 --rc genhtml_legend=1 00:09:35.441 --rc geninfo_all_blocks=1 00:09:35.441 --rc geninfo_unexecuted_blocks=1 00:09:35.441 00:09:35.441 ' 00:09:35.441 11:20:30 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.441 --rc genhtml_branch_coverage=1 00:09:35.441 --rc genhtml_function_coverage=1 00:09:35.441 --rc genhtml_legend=1 00:09:35.441 --rc geninfo_all_blocks=1 00:09:35.441 --rc geninfo_unexecuted_blocks=1 00:09:35.441 00:09:35.441 ' 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.441 --rc genhtml_branch_coverage=1 00:09:35.441 --rc genhtml_function_coverage=1 00:09:35.441 --rc genhtml_legend=1 00:09:35.441 --rc geninfo_all_blocks=1 00:09:35.441 --rc geninfo_unexecuted_blocks=1 00:09:35.441 00:09:35.441 ' 00:09:35.441 11:20:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:35.441 11:20:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57746 00:09:35.441 11:20:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57746 00:09:35.441 11:20:30 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57746 ']' 00:09:35.442 11:20:30 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.442 11:20:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:35.442 11:20:30 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.442 11:20:30 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.442 11:20:30 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.442 11:20:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.442 [2024-10-07 11:20:30.850982] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:09:35.442 [2024-10-07 11:20:30.851099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57746 ] 00:09:35.701 [2024-10-07 11:20:30.991254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.701 [2024-10-07 11:20:31.107624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.701 [2024-10-07 11:20:31.184429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.636 11:20:31 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.636 11:20:31 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:36.636 11:20:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:36.894 11:20:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57746 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57746 ']' 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57746 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57746 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.894 killing process with pid 57746 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57746' 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 57746 00:09:36.894 11:20:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 57746 00:09:37.157 00:09:37.157 real 0m2.072s 00:09:37.157 user 0m2.375s 00:09:37.157 sys 0m0.469s 00:09:37.157 11:20:32 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.157 11:20:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.157 ************************************ 00:09:37.157 END TEST alias_rpc 00:09:37.157 ************************************ 00:09:37.157 11:20:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:37.157 11:20:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:37.157 11:20:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.157 11:20:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.157 11:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:37.157 ************************************ 00:09:37.157 START TEST spdkcli_tcp 00:09:37.157 ************************************ 00:09:37.157 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:37.417 * Looking for test storage... 
00:09:37.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.417 11:20:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.417 --rc genhtml_branch_coverage=1 00:09:37.417 --rc genhtml_function_coverage=1 00:09:37.417 --rc genhtml_legend=1 00:09:37.417 --rc geninfo_all_blocks=1 00:09:37.417 --rc geninfo_unexecuted_blocks=1 00:09:37.417 00:09:37.417 ' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.417 --rc genhtml_branch_coverage=1 00:09:37.417 --rc genhtml_function_coverage=1 00:09:37.417 --rc genhtml_legend=1 00:09:37.417 --rc geninfo_all_blocks=1 00:09:37.417 --rc geninfo_unexecuted_blocks=1 00:09:37.417 
00:09:37.417 ' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.417 --rc genhtml_branch_coverage=1 00:09:37.417 --rc genhtml_function_coverage=1 00:09:37.417 --rc genhtml_legend=1 00:09:37.417 --rc geninfo_all_blocks=1 00:09:37.417 --rc geninfo_unexecuted_blocks=1 00:09:37.417 00:09:37.417 ' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.417 --rc genhtml_branch_coverage=1 00:09:37.417 --rc genhtml_function_coverage=1 00:09:37.417 --rc genhtml_legend=1 00:09:37.417 --rc geninfo_all_blocks=1 00:09:37.417 --rc geninfo_unexecuted_blocks=1 00:09:37.417 00:09:37.417 ' 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57830 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:37.417 11:20:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57830 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57830 ']' 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.417 11:20:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.417 [2024-10-07 11:20:32.927217] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
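The spdkcli_tcp run that follows talks to the target over TCP rather than the UNIX socket: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP address (the -r 100 retry and -t 2 timeout values are the ones visible in the log). A sketch of that bridge, assuming socat is available on the host:

  # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Issue an RPC through the bridge; the retries cover the window before socat is ready.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # Tear the bridge down once the query has completed.
  kill "$socat_pid" 2>/dev/null || true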
00:09:37.417 [2024-10-07 11:20:32.927580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57830 ] 00:09:37.678 [2024-10-07 11:20:33.062255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.678 [2024-10-07 11:20:33.192799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.678 [2024-10-07 11:20:33.192817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.942 [2024-10-07 11:20:33.271478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.509 11:20:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.509 11:20:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:38.509 11:20:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57847 00:09:38.509 11:20:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:38.509 11:20:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:38.768 [ 00:09:38.768 "bdev_malloc_delete", 00:09:38.768 "bdev_malloc_create", 00:09:38.768 "bdev_null_resize", 00:09:38.768 "bdev_null_delete", 00:09:38.768 "bdev_null_create", 00:09:38.768 "bdev_nvme_cuse_unregister", 00:09:38.768 "bdev_nvme_cuse_register", 00:09:38.768 "bdev_opal_new_user", 00:09:38.768 "bdev_opal_set_lock_state", 00:09:38.768 "bdev_opal_delete", 00:09:38.768 "bdev_opal_get_info", 00:09:38.768 "bdev_opal_create", 00:09:38.768 "bdev_nvme_opal_revert", 00:09:38.768 "bdev_nvme_opal_init", 00:09:38.768 "bdev_nvme_send_cmd", 00:09:38.768 "bdev_nvme_set_keys", 00:09:38.768 "bdev_nvme_get_path_iostat", 00:09:38.768 "bdev_nvme_get_mdns_discovery_info", 00:09:38.768 "bdev_nvme_stop_mdns_discovery", 00:09:38.768 "bdev_nvme_start_mdns_discovery", 00:09:38.768 "bdev_nvme_set_multipath_policy", 00:09:38.768 "bdev_nvme_set_preferred_path", 00:09:38.768 "bdev_nvme_get_io_paths", 00:09:38.768 "bdev_nvme_remove_error_injection", 00:09:38.768 "bdev_nvme_add_error_injection", 00:09:38.768 "bdev_nvme_get_discovery_info", 00:09:38.768 "bdev_nvme_stop_discovery", 00:09:38.768 "bdev_nvme_start_discovery", 00:09:38.768 "bdev_nvme_get_controller_health_info", 00:09:38.768 "bdev_nvme_disable_controller", 00:09:38.768 "bdev_nvme_enable_controller", 00:09:38.768 "bdev_nvme_reset_controller", 00:09:38.768 "bdev_nvme_get_transport_statistics", 00:09:38.768 "bdev_nvme_apply_firmware", 00:09:38.768 "bdev_nvme_detach_controller", 00:09:38.768 "bdev_nvme_get_controllers", 00:09:38.768 "bdev_nvme_attach_controller", 00:09:38.768 "bdev_nvme_set_hotplug", 00:09:38.768 "bdev_nvme_set_options", 00:09:38.768 "bdev_passthru_delete", 00:09:38.768 "bdev_passthru_create", 00:09:38.768 "bdev_lvol_set_parent_bdev", 00:09:38.768 "bdev_lvol_set_parent", 00:09:38.768 "bdev_lvol_check_shallow_copy", 00:09:38.768 "bdev_lvol_start_shallow_copy", 00:09:38.768 "bdev_lvol_grow_lvstore", 00:09:38.768 "bdev_lvol_get_lvols", 00:09:38.768 "bdev_lvol_get_lvstores", 00:09:38.768 "bdev_lvol_delete", 00:09:38.768 "bdev_lvol_set_read_only", 00:09:38.768 "bdev_lvol_resize", 00:09:38.768 "bdev_lvol_decouple_parent", 00:09:38.768 "bdev_lvol_inflate", 00:09:38.768 "bdev_lvol_rename", 00:09:38.768 "bdev_lvol_clone_bdev", 00:09:38.768 "bdev_lvol_clone", 00:09:38.768 "bdev_lvol_snapshot", 
00:09:38.768 "bdev_lvol_create", 00:09:38.768 "bdev_lvol_delete_lvstore", 00:09:38.768 "bdev_lvol_rename_lvstore", 00:09:38.768 "bdev_lvol_create_lvstore", 00:09:38.768 "bdev_raid_set_options", 00:09:38.768 "bdev_raid_remove_base_bdev", 00:09:38.768 "bdev_raid_add_base_bdev", 00:09:38.768 "bdev_raid_delete", 00:09:38.768 "bdev_raid_create", 00:09:38.768 "bdev_raid_get_bdevs", 00:09:38.768 "bdev_error_inject_error", 00:09:38.768 "bdev_error_delete", 00:09:38.768 "bdev_error_create", 00:09:38.768 "bdev_split_delete", 00:09:38.768 "bdev_split_create", 00:09:38.768 "bdev_delay_delete", 00:09:38.768 "bdev_delay_create", 00:09:38.768 "bdev_delay_update_latency", 00:09:38.768 "bdev_zone_block_delete", 00:09:38.768 "bdev_zone_block_create", 00:09:38.768 "blobfs_create", 00:09:38.768 "blobfs_detect", 00:09:38.768 "blobfs_set_cache_size", 00:09:38.768 "bdev_aio_delete", 00:09:38.768 "bdev_aio_rescan", 00:09:38.768 "bdev_aio_create", 00:09:38.768 "bdev_ftl_set_property", 00:09:38.768 "bdev_ftl_get_properties", 00:09:38.768 "bdev_ftl_get_stats", 00:09:38.768 "bdev_ftl_unmap", 00:09:38.768 "bdev_ftl_unload", 00:09:38.768 "bdev_ftl_delete", 00:09:38.768 "bdev_ftl_load", 00:09:38.768 "bdev_ftl_create", 00:09:38.768 "bdev_virtio_attach_controller", 00:09:38.769 "bdev_virtio_scsi_get_devices", 00:09:38.769 "bdev_virtio_detach_controller", 00:09:38.769 "bdev_virtio_blk_set_hotplug", 00:09:38.769 "bdev_iscsi_delete", 00:09:38.769 "bdev_iscsi_create", 00:09:38.769 "bdev_iscsi_set_options", 00:09:38.769 "bdev_uring_delete", 00:09:38.769 "bdev_uring_rescan", 00:09:38.769 "bdev_uring_create", 00:09:38.769 "accel_error_inject_error", 00:09:38.769 "ioat_scan_accel_module", 00:09:38.769 "dsa_scan_accel_module", 00:09:38.769 "iaa_scan_accel_module", 00:09:38.769 "keyring_file_remove_key", 00:09:38.769 "keyring_file_add_key", 00:09:38.769 "keyring_linux_set_options", 00:09:38.769 "fsdev_aio_delete", 00:09:38.769 "fsdev_aio_create", 00:09:38.769 "iscsi_get_histogram", 00:09:38.769 "iscsi_enable_histogram", 00:09:38.769 "iscsi_set_options", 00:09:38.769 "iscsi_get_auth_groups", 00:09:38.769 "iscsi_auth_group_remove_secret", 00:09:38.769 "iscsi_auth_group_add_secret", 00:09:38.769 "iscsi_delete_auth_group", 00:09:38.769 "iscsi_create_auth_group", 00:09:38.769 "iscsi_set_discovery_auth", 00:09:38.769 "iscsi_get_options", 00:09:38.769 "iscsi_target_node_request_logout", 00:09:38.769 "iscsi_target_node_set_redirect", 00:09:38.769 "iscsi_target_node_set_auth", 00:09:38.769 "iscsi_target_node_add_lun", 00:09:38.769 "iscsi_get_stats", 00:09:38.769 "iscsi_get_connections", 00:09:38.769 "iscsi_portal_group_set_auth", 00:09:38.769 "iscsi_start_portal_group", 00:09:38.769 "iscsi_delete_portal_group", 00:09:38.769 "iscsi_create_portal_group", 00:09:38.769 "iscsi_get_portal_groups", 00:09:38.769 "iscsi_delete_target_node", 00:09:38.769 "iscsi_target_node_remove_pg_ig_maps", 00:09:38.769 "iscsi_target_node_add_pg_ig_maps", 00:09:38.769 "iscsi_create_target_node", 00:09:38.769 "iscsi_get_target_nodes", 00:09:38.769 "iscsi_delete_initiator_group", 00:09:38.769 "iscsi_initiator_group_remove_initiators", 00:09:38.769 "iscsi_initiator_group_add_initiators", 00:09:38.769 "iscsi_create_initiator_group", 00:09:38.769 "iscsi_get_initiator_groups", 00:09:38.769 "nvmf_set_crdt", 00:09:38.769 "nvmf_set_config", 00:09:38.769 "nvmf_set_max_subsystems", 00:09:38.769 "nvmf_stop_mdns_prr", 00:09:38.769 "nvmf_publish_mdns_prr", 00:09:38.769 "nvmf_subsystem_get_listeners", 00:09:38.769 "nvmf_subsystem_get_qpairs", 00:09:38.769 
"nvmf_subsystem_get_controllers", 00:09:38.769 "nvmf_get_stats", 00:09:38.769 "nvmf_get_transports", 00:09:38.769 "nvmf_create_transport", 00:09:38.769 "nvmf_get_targets", 00:09:38.769 "nvmf_delete_target", 00:09:38.769 "nvmf_create_target", 00:09:38.769 "nvmf_subsystem_allow_any_host", 00:09:38.769 "nvmf_subsystem_set_keys", 00:09:38.769 "nvmf_subsystem_remove_host", 00:09:38.769 "nvmf_subsystem_add_host", 00:09:38.769 "nvmf_ns_remove_host", 00:09:38.769 "nvmf_ns_add_host", 00:09:38.769 "nvmf_subsystem_remove_ns", 00:09:38.769 "nvmf_subsystem_set_ns_ana_group", 00:09:38.769 "nvmf_subsystem_add_ns", 00:09:38.769 "nvmf_subsystem_listener_set_ana_state", 00:09:38.769 "nvmf_discovery_get_referrals", 00:09:38.769 "nvmf_discovery_remove_referral", 00:09:38.769 "nvmf_discovery_add_referral", 00:09:38.769 "nvmf_subsystem_remove_listener", 00:09:38.769 "nvmf_subsystem_add_listener", 00:09:38.769 "nvmf_delete_subsystem", 00:09:38.769 "nvmf_create_subsystem", 00:09:38.769 "nvmf_get_subsystems", 00:09:38.769 "env_dpdk_get_mem_stats", 00:09:38.769 "nbd_get_disks", 00:09:38.769 "nbd_stop_disk", 00:09:38.769 "nbd_start_disk", 00:09:38.769 "ublk_recover_disk", 00:09:38.769 "ublk_get_disks", 00:09:38.769 "ublk_stop_disk", 00:09:38.769 "ublk_start_disk", 00:09:38.769 "ublk_destroy_target", 00:09:38.769 "ublk_create_target", 00:09:38.769 "virtio_blk_create_transport", 00:09:38.769 "virtio_blk_get_transports", 00:09:38.769 "vhost_controller_set_coalescing", 00:09:38.769 "vhost_get_controllers", 00:09:38.769 "vhost_delete_controller", 00:09:38.769 "vhost_create_blk_controller", 00:09:38.769 "vhost_scsi_controller_remove_target", 00:09:38.769 "vhost_scsi_controller_add_target", 00:09:38.769 "vhost_start_scsi_controller", 00:09:38.769 "vhost_create_scsi_controller", 00:09:38.769 "thread_set_cpumask", 00:09:38.769 "scheduler_set_options", 00:09:38.769 "framework_get_governor", 00:09:38.769 "framework_get_scheduler", 00:09:38.769 "framework_set_scheduler", 00:09:38.769 "framework_get_reactors", 00:09:38.769 "thread_get_io_channels", 00:09:38.769 "thread_get_pollers", 00:09:38.769 "thread_get_stats", 00:09:38.769 "framework_monitor_context_switch", 00:09:38.769 "spdk_kill_instance", 00:09:38.769 "log_enable_timestamps", 00:09:38.769 "log_get_flags", 00:09:38.769 "log_clear_flag", 00:09:38.769 "log_set_flag", 00:09:38.769 "log_get_level", 00:09:38.769 "log_set_level", 00:09:38.769 "log_get_print_level", 00:09:38.769 "log_set_print_level", 00:09:38.769 "framework_enable_cpumask_locks", 00:09:38.769 "framework_disable_cpumask_locks", 00:09:38.769 "framework_wait_init", 00:09:38.769 "framework_start_init", 00:09:38.769 "scsi_get_devices", 00:09:38.769 "bdev_get_histogram", 00:09:38.769 "bdev_enable_histogram", 00:09:38.769 "bdev_set_qos_limit", 00:09:38.769 "bdev_set_qd_sampling_period", 00:09:38.769 "bdev_get_bdevs", 00:09:38.769 "bdev_reset_iostat", 00:09:38.769 "bdev_get_iostat", 00:09:38.769 "bdev_examine", 00:09:38.769 "bdev_wait_for_examine", 00:09:38.769 "bdev_set_options", 00:09:38.769 "accel_get_stats", 00:09:38.769 "accel_set_options", 00:09:38.769 "accel_set_driver", 00:09:38.769 "accel_crypto_key_destroy", 00:09:38.769 "accel_crypto_keys_get", 00:09:38.769 "accel_crypto_key_create", 00:09:38.769 "accel_assign_opc", 00:09:38.769 "accel_get_module_info", 00:09:38.769 "accel_get_opc_assignments", 00:09:38.769 "vmd_rescan", 00:09:38.769 "vmd_remove_device", 00:09:38.769 "vmd_enable", 00:09:38.769 "sock_get_default_impl", 00:09:38.769 "sock_set_default_impl", 00:09:38.769 "sock_impl_set_options", 00:09:38.769 
"sock_impl_get_options", 00:09:38.769 "iobuf_get_stats", 00:09:38.769 "iobuf_set_options", 00:09:38.769 "keyring_get_keys", 00:09:38.769 "framework_get_pci_devices", 00:09:38.769 "framework_get_config", 00:09:38.769 "framework_get_subsystems", 00:09:38.769 "fsdev_set_opts", 00:09:38.769 "fsdev_get_opts", 00:09:38.769 "trace_get_info", 00:09:38.769 "trace_get_tpoint_group_mask", 00:09:38.769 "trace_disable_tpoint_group", 00:09:38.769 "trace_enable_tpoint_group", 00:09:38.769 "trace_clear_tpoint_mask", 00:09:38.769 "trace_set_tpoint_mask", 00:09:38.769 "notify_get_notifications", 00:09:38.769 "notify_get_types", 00:09:38.769 "spdk_get_version", 00:09:38.769 "rpc_get_methods" 00:09:38.769 ] 00:09:38.769 11:20:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:38.769 11:20:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.769 11:20:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.769 11:20:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:38.769 11:20:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57830 00:09:38.769 11:20:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57830 ']' 00:09:38.769 11:20:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57830 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57830 00:09:39.028 killing process with pid 57830 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57830' 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57830 00:09:39.028 11:20:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57830 00:09:39.287 ************************************ 00:09:39.287 END TEST spdkcli_tcp 00:09:39.287 ************************************ 00:09:39.287 00:09:39.287 real 0m2.082s 00:09:39.287 user 0m3.814s 00:09:39.287 sys 0m0.526s 00:09:39.287 11:20:34 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.287 11:20:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.287 11:20:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:39.287 11:20:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.287 11:20:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.287 11:20:34 -- common/autotest_common.sh@10 -- # set +x 00:09:39.287 ************************************ 00:09:39.287 START TEST dpdk_mem_utility 00:09:39.287 ************************************ 00:09:39.287 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:39.546 * Looking for test storage... 
00:09:39.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:39.546 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:39.546 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:09:39.546 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:39.546 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:39.546 11:20:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:39.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.547 11:20:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.547 --rc genhtml_branch_coverage=1 00:09:39.547 --rc genhtml_function_coverage=1 00:09:39.547 --rc genhtml_legend=1 00:09:39.547 --rc geninfo_all_blocks=1 00:09:39.547 --rc geninfo_unexecuted_blocks=1 00:09:39.547 00:09:39.547 ' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.547 --rc genhtml_branch_coverage=1 00:09:39.547 --rc genhtml_function_coverage=1 00:09:39.547 --rc genhtml_legend=1 00:09:39.547 --rc geninfo_all_blocks=1 00:09:39.547 --rc geninfo_unexecuted_blocks=1 00:09:39.547 00:09:39.547 ' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.547 --rc genhtml_branch_coverage=1 00:09:39.547 --rc genhtml_function_coverage=1 00:09:39.547 --rc genhtml_legend=1 00:09:39.547 --rc geninfo_all_blocks=1 00:09:39.547 --rc geninfo_unexecuted_blocks=1 00:09:39.547 00:09:39.547 ' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.547 --rc genhtml_branch_coverage=1 00:09:39.547 --rc genhtml_function_coverage=1 00:09:39.547 --rc genhtml_legend=1 00:09:39.547 --rc geninfo_all_blocks=1 00:09:39.547 --rc geninfo_unexecuted_blocks=1 00:09:39.547 00:09:39.547 ' 00:09:39.547 11:20:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:39.547 11:20:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57929 00:09:39.547 11:20:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.547 11:20:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57929 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57929 ']' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.547 11:20:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:39.547 [2024-10-07 11:20:35.050192] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
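The dpdk_mem_utility test driven by this target (its output follows) asks the app to dump DPDK memory statistics and then post-processes the dump with the helper named in MEM_SCRIPT: env_dpdk_get_mem_stats reports the dump file it wrote (/tmp/spdk_mem_dump.txt in this run), dpdk_mem_info.py summarizes heaps, mempools and memzones, and '-m 0' narrows the report to heap 0. A sketch of that sequence, using rpc.py directly in place of the test's rpc_cmd wrapper:

  # Ask the running spdk_tgt to write its DPDK memory dump; the RPC prints the file name.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize every heap, mempool and memzone found in the dump ...
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # ... then show only heap 0 in detail, as the test does for its final check.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0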
00:09:39.547 [2024-10-07 11:20:35.050815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57929 ] 00:09:39.806 [2024-10-07 11:20:35.184063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.806 [2024-10-07 11:20:35.303688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.065 [2024-10-07 11:20:35.380286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.632 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.632 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:40.633 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:40.633 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:40.633 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.633 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:40.633 { 00:09:40.633 "filename": "/tmp/spdk_mem_dump.txt" 00:09:40.633 } 00:09:40.633 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.633 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:40.893 DPDK memory size 860.000000 MiB in 1 heap(s) 00:09:40.893 1 heaps totaling size 860.000000 MiB 00:09:40.893 size: 860.000000 MiB heap id: 0 00:09:40.893 end heaps---------- 00:09:40.893 9 mempools totaling size 642.649841 MiB 00:09:40.893 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:40.893 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:40.893 size: 92.545471 MiB name: bdev_io_57929 00:09:40.893 size: 51.011292 MiB name: evtpool_57929 00:09:40.893 size: 50.003479 MiB name: msgpool_57929 00:09:40.893 size: 36.509338 MiB name: fsdev_io_57929 00:09:40.893 size: 21.763794 MiB name: PDU_Pool 00:09:40.893 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:40.893 size: 0.026123 MiB name: Session_Pool 00:09:40.893 end mempools------- 00:09:40.893 6 memzones totaling size 4.142822 MiB 00:09:40.893 size: 1.000366 MiB name: RG_ring_0_57929 00:09:40.893 size: 1.000366 MiB name: RG_ring_1_57929 00:09:40.893 size: 1.000366 MiB name: RG_ring_4_57929 00:09:40.893 size: 1.000366 MiB name: RG_ring_5_57929 00:09:40.893 size: 0.125366 MiB name: RG_ring_2_57929 00:09:40.893 size: 0.015991 MiB name: RG_ring_3_57929 00:09:40.893 end memzones------- 00:09:40.893 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:40.893 heap id: 0 total size: 860.000000 MiB number of busy elements: 308 number of free elements: 16 00:09:40.893 list of free elements. 
size: 13.936340 MiB 00:09:40.893 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:40.893 element at address: 0x200000800000 with size: 1.996948 MiB 00:09:40.893 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:09:40.893 element at address: 0x20001be00000 with size: 0.999878 MiB 00:09:40.893 element at address: 0x200034a00000 with size: 0.994446 MiB 00:09:40.893 element at address: 0x200009600000 with size: 0.959839 MiB 00:09:40.893 element at address: 0x200015e00000 with size: 0.954285 MiB 00:09:40.893 element at address: 0x20001c000000 with size: 0.936584 MiB 00:09:40.893 element at address: 0x200000200000 with size: 0.835022 MiB 00:09:40.894 element at address: 0x20001d800000 with size: 0.567505 MiB 00:09:40.894 element at address: 0x20000d800000 with size: 0.489624 MiB 00:09:40.894 element at address: 0x200003e00000 with size: 0.487732 MiB 00:09:40.894 element at address: 0x20001c200000 with size: 0.485657 MiB 00:09:40.894 element at address: 0x200007000000 with size: 0.480286 MiB 00:09:40.894 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:09:40.894 element at address: 0x200003a00000 with size: 0.353394 MiB 00:09:40.894 list of standard malloc elements. size: 199.266968 MiB 00:09:40.894 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:09:40.894 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:09:40.894 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:09:40.894 element at address: 0x20001befff80 with size: 1.000122 MiB 00:09:40.894 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:09:40.894 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:40.894 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:09:40.894 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:40.894 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:09:40.894 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:09:40.894 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a5ec40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003aff940 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:09:40.894 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:09:40.894 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003eff000 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707af40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b000 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b180 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b240 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b300 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b480 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b540 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b600 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:09:40.895 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d880 
with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891480 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891540 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891600 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891780 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891840 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891900 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892080 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892140 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892200 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892380 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892440 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892500 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892680 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892740 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892800 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892980 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893040 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893100 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893280 with size: 0.000183 MiB 
00:09:40.895 element at address: 0x20001d893340 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893400 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893580 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893640 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893700 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893880 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893940 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894000 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894180 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894240 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894300 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894480 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894540 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894600 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894780 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894840 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894900 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d895080 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d895140 with size: 0.000183 MiB 00:09:40.895 element at address: 0x20001d895200 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20001d895380 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20001d895440 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:09:40.896 element at 
address: 0x20002ac6c480 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6e940 
with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:09:40.896 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:09:40.896 list of memzone associated elements. 
size: 646.796692 MiB 00:09:40.896 element at address: 0x20001d895500 with size: 211.416748 MiB 00:09:40.896 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:40.896 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:09:40.896 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:40.896 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:09:40.896 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57929_0 00:09:40.896 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:40.896 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57929_0 00:09:40.896 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:40.896 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57929_0 00:09:40.896 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:09:40.896 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57929_0 00:09:40.896 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:09:40.896 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:40.896 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:09:40.896 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:40.896 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:40.896 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57929 00:09:40.896 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:40.896 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57929 00:09:40.896 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:40.896 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57929 00:09:40.896 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:09:40.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:40.896 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:09:40.897 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:40.897 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:09:40.897 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:40.897 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:09:40.897 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:40.897 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:40.897 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57929 00:09:40.897 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:40.897 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57929 00:09:40.897 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:09:40.897 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57929 00:09:40.897 element at address: 0x200034afe940 with size: 1.000488 MiB 00:09:40.897 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57929 00:09:40.897 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:09:40.897 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57929 00:09:40.897 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:09:40.897 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57929 00:09:40.897 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:09:40.897 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:40.897 element at address: 0x20000707b780 with size: 0.500488 MiB 00:09:40.897 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:09:40.897 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:09:40.897 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:40.897 element at address: 0x200003a5ed00 with size: 0.125488 MiB 00:09:40.897 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57929 00:09:40.897 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:09:40.897 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:40.897 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:09:40.897 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:40.897 element at address: 0x200003a5aa40 with size: 0.016113 MiB 00:09:40.897 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57929 00:09:40.897 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:09:40.897 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:40.897 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:09:40.897 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57929 00:09:40.897 element at address: 0x200003affa00 with size: 0.000305 MiB 00:09:40.897 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57929 00:09:40.897 element at address: 0x200003a5a840 with size: 0.000305 MiB 00:09:40.897 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57929 00:09:40.897 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:09:40.897 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:40.897 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:40.897 11:20:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57929 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57929 ']' 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57929 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57929 00:09:40.897 killing process with pid 57929 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57929' 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57929 00:09:40.897 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57929 00:09:41.466 ************************************ 00:09:41.466 END TEST dpdk_mem_utility 00:09:41.466 ************************************ 00:09:41.466 00:09:41.466 real 0m1.942s 00:09:41.466 user 0m2.139s 00:09:41.466 sys 0m0.480s 00:09:41.466 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.466 11:20:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:41.466 11:20:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:41.466 11:20:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.466 11:20:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.466 11:20:36 -- common/autotest_common.sh@10 -- # set +x 
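For reference, the dpdk_mem_utility flow traced above can be reproduced by hand against a running spdk_tgt: the test asks the target to dump its DPDK memory state via the env_dpdk_get_mem_stats RPC (the JSON reply above shows the dump landing in /tmp/spdk_mem_dump.txt), then feeds that dump to scripts/dpdk_mem_info.py for the heap/mempool/memzone summary and the per-element view of heap 0. A minimal sketch, assuming the standard scripts/rpc.py wrapper and the default RPC socket /var/tmp/spdk.sock that waitforlisten polls in the trace; paths mirror the ones printed above and the crude sleep stands in for the test's waitforlisten helper:

    # sketch only -- not the test script itself
    SPDK=/home/vagrant/spdk_repo/spdk

    "$SPDK/build/bin/spdk_tgt" &                 # start the target; it listens on /var/tmp/spdk.sock
    sleep 2                                      # the test uses waitforlisten "$spdkpid" instead

    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                # heap / mempool / memzone summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0           # per-element listing of heap id 0, as shown above

    kill %1                                      # the test uses killprocess "$spdkpid"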
00:09:41.466 ************************************ 00:09:41.466 START TEST event 00:09:41.466 ************************************ 00:09:41.466 11:20:36 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:41.466 * Looking for test storage... 00:09:41.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:41.466 11:20:36 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.466 11:20:36 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.466 11:20:36 event -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.466 11:20:36 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.466 11:20:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.466 11:20:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.467 11:20:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.467 11:20:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.467 11:20:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.467 11:20:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.467 11:20:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.467 11:20:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.467 11:20:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.467 11:20:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.467 11:20:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.467 11:20:36 event -- scripts/common.sh@344 -- # case "$op" in 00:09:41.467 11:20:36 event -- scripts/common.sh@345 -- # : 1 00:09:41.467 11:20:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.467 11:20:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.467 11:20:36 event -- scripts/common.sh@365 -- # decimal 1 00:09:41.467 11:20:36 event -- scripts/common.sh@353 -- # local d=1 00:09:41.467 11:20:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.467 11:20:36 event -- scripts/common.sh@355 -- # echo 1 00:09:41.467 11:20:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.467 11:20:36 event -- scripts/common.sh@366 -- # decimal 2 00:09:41.467 11:20:36 event -- scripts/common.sh@353 -- # local d=2 00:09:41.467 11:20:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.467 11:20:36 event -- scripts/common.sh@355 -- # echo 2 00:09:41.467 11:20:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.467 11:20:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.467 11:20:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.467 11:20:36 event -- scripts/common.sh@368 -- # return 0 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.467 --rc genhtml_branch_coverage=1 00:09:41.467 --rc genhtml_function_coverage=1 00:09:41.467 --rc genhtml_legend=1 00:09:41.467 --rc geninfo_all_blocks=1 00:09:41.467 --rc geninfo_unexecuted_blocks=1 00:09:41.467 00:09:41.467 ' 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.467 --rc genhtml_branch_coverage=1 00:09:41.467 --rc genhtml_function_coverage=1 00:09:41.467 --rc genhtml_legend=1 00:09:41.467 --rc 
geninfo_all_blocks=1 00:09:41.467 --rc geninfo_unexecuted_blocks=1 00:09:41.467 00:09:41.467 ' 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.467 --rc genhtml_branch_coverage=1 00:09:41.467 --rc genhtml_function_coverage=1 00:09:41.467 --rc genhtml_legend=1 00:09:41.467 --rc geninfo_all_blocks=1 00:09:41.467 --rc geninfo_unexecuted_blocks=1 00:09:41.467 00:09:41.467 ' 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:41.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.467 --rc genhtml_branch_coverage=1 00:09:41.467 --rc genhtml_function_coverage=1 00:09:41.467 --rc genhtml_legend=1 00:09:41.467 --rc geninfo_all_blocks=1 00:09:41.467 --rc geninfo_unexecuted_blocks=1 00:09:41.467 00:09:41.467 ' 00:09:41.467 11:20:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:41.467 11:20:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:41.467 11:20:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:41.467 11:20:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.467 11:20:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.726 ************************************ 00:09:41.726 START TEST event_perf 00:09:41.726 ************************************ 00:09:41.726 11:20:36 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:41.726 Running I/O for 1 seconds...[2024-10-07 11:20:37.014213] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:41.726 [2024-10-07 11:20:37.014340] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 00:09:41.726 [2024-10-07 11:20:37.152589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.986 [2024-10-07 11:20:37.270695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.986 [2024-10-07 11:20:37.270801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.986 [2024-10-07 11:20:37.270937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.986 [2024-10-07 11:20:37.270941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.090 Running I/O for 1 seconds... 00:09:43.090 lcore 0: 192545 00:09:43.090 lcore 1: 192545 00:09:43.090 lcore 2: 192546 00:09:43.090 lcore 3: 192548 00:09:43.090 done. 
00:09:43.090 00:09:43.090 real 0m1.367s 00:09:43.090 user 0m4.179s 00:09:43.090 sys 0m0.065s 00:09:43.090 11:20:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.090 11:20:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:43.090 ************************************ 00:09:43.090 END TEST event_perf 00:09:43.090 ************************************ 00:09:43.090 11:20:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:43.090 11:20:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:43.090 11:20:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.090 11:20:38 event -- common/autotest_common.sh@10 -- # set +x 00:09:43.090 ************************************ 00:09:43.090 START TEST event_reactor 00:09:43.090 ************************************ 00:09:43.090 11:20:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:43.090 [2024-10-07 11:20:38.436785] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:09:43.090 [2024-10-07 11:20:38.436891] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58053 ] 00:09:43.090 [2024-10-07 11:20:38.567206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.349 [2024-10-07 11:20:38.669475] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.286 test_start 00:09:44.286 oneshot 00:09:44.286 tick 100 00:09:44.286 tick 100 00:09:44.286 tick 250 00:09:44.286 tick 100 00:09:44.286 tick 100 00:09:44.286 tick 250 00:09:44.286 tick 100 00:09:44.287 tick 500 00:09:44.287 tick 100 00:09:44.287 tick 100 00:09:44.287 tick 250 00:09:44.287 tick 100 00:09:44.287 tick 100 00:09:44.287 test_end 00:09:44.287 00:09:44.287 real 0m1.340s 00:09:44.287 user 0m1.174s 00:09:44.287 sys 0m0.060s 00:09:44.287 ************************************ 00:09:44.287 END TEST event_reactor 00:09:44.287 ************************************ 00:09:44.287 11:20:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.287 11:20:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:44.287 11:20:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:44.287 11:20:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:44.287 11:20:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.287 11:20:39 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.287 ************************************ 00:09:44.287 START TEST event_reactor_perf 00:09:44.287 ************************************ 00:09:44.287 11:20:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:44.546 [2024-10-07 11:20:39.827572] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:09:44.546 [2024-10-07 11:20:39.827664] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58083 ] 00:09:44.546 [2024-10-07 11:20:39.963826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.804 [2024-10-07 11:20:40.084618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.742 test_start 00:09:45.742 test_end 00:09:45.742 Performance: 386089 events per second 00:09:45.742 ************************************ 00:09:45.742 END TEST event_reactor_perf 00:09:45.742 ************************************ 00:09:45.742 00:09:45.742 real 0m1.372s 00:09:45.742 user 0m1.206s 00:09:45.742 sys 0m0.058s 00:09:45.742 11:20:41 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.742 11:20:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:45.742 11:20:41 event -- event/event.sh@49 -- # uname -s 00:09:45.742 11:20:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:45.742 11:20:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:45.742 11:20:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.742 11:20:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.742 11:20:41 event -- common/autotest_common.sh@10 -- # set +x 00:09:45.742 ************************************ 00:09:45.742 START TEST event_scheduler 00:09:45.742 ************************************ 00:09:45.742 11:20:41 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:46.002 * Looking for test storage... 
00:09:46.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:46.002 11:20:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.002 --rc genhtml_branch_coverage=1 00:09:46.002 --rc genhtml_function_coverage=1 00:09:46.002 --rc genhtml_legend=1 00:09:46.002 --rc geninfo_all_blocks=1 00:09:46.002 --rc geninfo_unexecuted_blocks=1 00:09:46.002 00:09:46.002 ' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.002 --rc genhtml_branch_coverage=1 00:09:46.002 --rc genhtml_function_coverage=1 00:09:46.002 --rc genhtml_legend=1 00:09:46.002 --rc geninfo_all_blocks=1 00:09:46.002 --rc geninfo_unexecuted_blocks=1 00:09:46.002 00:09:46.002 ' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:46.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.002 --rc genhtml_branch_coverage=1 00:09:46.002 --rc genhtml_function_coverage=1 00:09:46.002 --rc genhtml_legend=1 00:09:46.002 --rc geninfo_all_blocks=1 00:09:46.002 --rc geninfo_unexecuted_blocks=1 00:09:46.002 00:09:46.002 ' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.002 --rc genhtml_branch_coverage=1 00:09:46.002 --rc genhtml_function_coverage=1 00:09:46.002 --rc genhtml_legend=1 00:09:46.002 --rc geninfo_all_blocks=1 00:09:46.002 --rc geninfo_unexecuted_blocks=1 00:09:46.002 00:09:46.002 ' 00:09:46.002 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:46.002 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58158 00:09:46.002 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:46.002 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.002 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58158 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58158 ']' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.002 11:20:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:46.002 [2024-10-07 11:20:41.485862] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:09:46.002 [2024-10-07 11:20:41.486467] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58158 ] 00:09:46.261 [2024-10-07 11:20:41.622615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.261 [2024-10-07 11:20:41.735169] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.261 [2024-10-07 11:20:41.735245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.261 [2024-10-07 11:20:41.735361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.261 [2024-10-07 11:20:41.735362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:46.521 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:46.521 POWER: Cannot set governor of lcore 0 to userspace 00:09:46.521 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:46.521 POWER: Cannot set governor of lcore 0 to performance 00:09:46.521 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:46.521 POWER: Cannot set governor of lcore 0 to userspace 00:09:46.521 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:46.521 POWER: Cannot set governor of lcore 0 to userspace 00:09:46.521 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:46.521 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:46.521 POWER: Unable to set Power Management Environment for lcore 0 00:09:46.521 [2024-10-07 11:20:41.794813] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:46.521 [2024-10-07 11:20:41.794925] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:46.521 [2024-10-07 11:20:41.794967] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:46.521 [2024-10-07 11:20:41.795091] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:46.521 [2024-10-07 11:20:41.795131] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:46.521 [2024-10-07 11:20:41.795237] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 [2024-10-07 11:20:41.855387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.521 [2024-10-07 11:20:41.891819] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 ************************************ 00:09:46.521 START TEST scheduler_create_thread 00:09:46.521 ************************************ 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 2 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 3 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 4 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 5 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.521 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.521 6 00:09:46.521 
11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.522 7 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.522 8 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.522 9 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.522 10 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.522 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.459 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.459 11:20:42 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:47.459 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.459 11:20:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:48.879 11:20:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.879 11:20:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:48.879 11:20:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:48.879 11:20:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.879 11:20:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.815 ************************************ 00:09:49.815 END TEST scheduler_create_thread 00:09:49.815 ************************************ 00:09:49.815 11:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.815 00:09:49.815 real 0m3.377s 00:09:49.815 user 0m0.015s 00:09:49.815 sys 0m0.007s 00:09:49.815 11:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.815 11:20:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.815 11:20:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:49.815 11:20:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58158 00:09:49.815 11:20:45 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58158 ']' 00:09:49.815 11:20:45 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58158 00:09:49.815 11:20:45 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:49.815 11:20:45 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.815 11:20:45 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58158 00:09:50.073 11:20:45 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:50.073 11:20:45 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:50.073 killing process with pid 58158 00:09:50.073 11:20:45 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58158' 00:09:50.073 11:20:45 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58158 00:09:50.073 11:20:45 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58158 00:09:50.332 [2024-10-07 11:20:45.656858] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
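Stripped of the xtrace noise, the scheduler_create_thread pass above reduces to the RPC sequence below. This is a condensed sketch, not the script itself: every call listed appears verbatim in the trace, rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the thread ids 11/12 and pid 58158 are the values returned in this particular run.

  rpc_cmd framework_set_scheduler dynamic        # dynamic scheduler: load limit 20, core limit 80, core busy 95
  rpc_cmd framework_start_init
  # one active and one idle thread pinned to each of the four cores (masks 0x1, 0x2, 0x4, 0x8)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
  # unpinned threads with partial activity: one gets re-weighted, one gets deleted
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0       # returned thread_id=11 here
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100         # returned thread_id=12 here
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
  killprocess 58158                              # stop the scheduler app once the checks pass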
00:09:50.591 00:09:50.591 real 0m4.713s 00:09:50.591 user 0m8.046s 00:09:50.591 sys 0m0.374s 00:09:50.591 11:20:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.591 ************************************ 00:09:50.591 END TEST event_scheduler 00:09:50.591 ************************************ 00:09:50.591 11:20:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:50.591 11:20:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:50.591 11:20:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:50.591 11:20:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.591 11:20:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.591 11:20:45 event -- common/autotest_common.sh@10 -- # set +x 00:09:50.592 ************************************ 00:09:50.592 START TEST app_repeat 00:09:50.592 ************************************ 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:50.592 Process app_repeat pid: 58250 00:09:50.592 spdk_app_start Round 0 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58250 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58250' 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:50.592 11:20:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.592 11:20:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:50.592 [2024-10-07 11:20:46.037358] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
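For reference, the application being brought up here is the app_repeat binary with exactly the flags traced above; the backgrounding and pid capture are sketched the way event.sh normally wires them up, not copied from the trace.

  # -r: RPC socket used by the nbd/bdev helpers below; -m 0x3: the two reactors on cores 0 and 1; -t 4: repeat_times=4 from event.sh
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!                                    # 58250 in this run
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock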
00:09:50.592 [2024-10-07 11:20:46.038221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:09:50.850 [2024-10-07 11:20:46.170073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.850 [2024-10-07 11:20:46.293080] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.850 [2024-10-07 11:20:46.293091] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.850 [2024-10-07 11:20:46.351201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.786 11:20:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.786 11:20:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:51.786 11:20:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.045 Malloc0 00:09:52.045 11:20:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.303 Malloc1 00:09:52.303 11:20:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.303 11:20:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:52.562 /dev/nbd0 00:09:52.562 11:20:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.562 11:20:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:52.562 11:20:47 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.562 1+0 records in 00:09:52.562 1+0 records out 00:09:52.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335538 s, 12.2 MB/s 00:09:52.562 11:20:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.562 11:20:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:52.562 11:20:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.562 11:20:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:52.562 11:20:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:52.562 11:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.562 11:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.562 11:20:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:52.820 /dev/nbd1 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.820 1+0 records in 00:09:52.820 1+0 records out 00:09:52.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357545 s, 11.5 MB/s 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:52.820 11:20:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.820 11:20:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.077 11:20:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:53.077 { 00:09:53.077 "nbd_device": "/dev/nbd0", 00:09:53.077 "bdev_name": "Malloc0" 00:09:53.077 }, 00:09:53.077 { 00:09:53.077 "nbd_device": "/dev/nbd1", 00:09:53.077 "bdev_name": "Malloc1" 00:09:53.077 } 00:09:53.077 ]' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:53.335 { 00:09:53.335 "nbd_device": "/dev/nbd0", 00:09:53.335 "bdev_name": "Malloc0" 00:09:53.335 }, 00:09:53.335 { 00:09:53.335 "nbd_device": "/dev/nbd1", 00:09:53.335 "bdev_name": "Malloc1" 00:09:53.335 } 00:09:53.335 ]' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:53.335 /dev/nbd1' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:53.335 /dev/nbd1' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:53.335 256+0 records in 00:09:53.335 256+0 records out 00:09:53.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662797 s, 158 MB/s 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.335 256+0 records in 00:09:53.335 256+0 records out 00:09:53.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275825 s, 38.0 MB/s 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:53.335 256+0 records in 00:09:53.335 256+0 records out 00:09:53.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287024 s, 36.5 MB/s 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.335 11:20:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.336 11:20:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.662 11:20:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.936 11:20:49 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.936 11:20:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.503 11:20:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.503 11:20:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:54.762 11:20:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:55.021 [2024-10-07 11:20:50.322057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.021 [2024-10-07 11:20:50.436226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.021 [2024-10-07 11:20:50.436235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.021 [2024-10-07 11:20:50.492632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.021 [2024-10-07 11:20:50.492718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:55.021 [2024-10-07 11:20:50.492733] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:58.306 11:20:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:58.306 spdk_app_start Round 1 00:09:58.306 11:20:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:58.306 11:20:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
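Round 0 completes above and rounds 1-3 below repeat the same write/verify pass over two 64 MB malloc bdevs exported as kernel nbd devices. Condensed, each round amounts to the sequence below; paths, sizes and the 1 MiB compare window are exactly those traced, while the RPC and T shell variables are introduced here only to keep the lines short.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  T=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $RPC bdev_malloc_create 64 4096                  # Malloc0
  $RPC bdev_malloc_create 64 4096                  # Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=$T bs=4096 count=256       # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$T of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M $T $nbd                         # read back through the kernel nbd device and compare
  done
  rm $T
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM                  # end of round; app_repeat comes back up for the next one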
00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.306 11:20:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:58.306 11:20:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:58.306 Malloc0 00:09:58.306 11:20:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:58.564 Malloc1 00:09:58.822 11:20:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.822 11:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:58.822 /dev/nbd0 00:09:59.081 11:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:59.081 11:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:59.081 1+0 records in 00:09:59.081 1+0 records out 
00:09:59.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398326 s, 10.3 MB/s 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:59.081 11:20:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:59.081 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.081 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:59.081 11:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:59.339 /dev/nbd1 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:59.339 1+0 records in 00:09:59.339 1+0 records out 00:09:59.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321414 s, 12.7 MB/s 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:59.339 11:20:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.339 11:20:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.598 11:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:59.598 { 00:09:59.598 "nbd_device": "/dev/nbd0", 00:09:59.599 "bdev_name": "Malloc0" 00:09:59.599 }, 00:09:59.599 { 00:09:59.599 "nbd_device": "/dev/nbd1", 00:09:59.599 "bdev_name": "Malloc1" 00:09:59.599 } 
00:09:59.599 ]' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:59.599 { 00:09:59.599 "nbd_device": "/dev/nbd0", 00:09:59.599 "bdev_name": "Malloc0" 00:09:59.599 }, 00:09:59.599 { 00:09:59.599 "nbd_device": "/dev/nbd1", 00:09:59.599 "bdev_name": "Malloc1" 00:09:59.599 } 00:09:59.599 ]' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:59.599 /dev/nbd1' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:59.599 /dev/nbd1' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:59.599 256+0 records in 00:09:59.599 256+0 records out 00:09:59.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104826 s, 100 MB/s 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.599 11:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:59.857 256+0 records in 00:09:59.857 256+0 records out 00:09:59.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023847 s, 44.0 MB/s 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:59.857 256+0 records in 00:09:59.857 256+0 records out 00:09:59.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260938 s, 40.2 MB/s 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:59.857 11:20:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.858 11:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.858 11:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:59.858 11:20:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:59.858 11:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.858 11:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.116 11:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.374 11:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:00.632 11:20:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:00.632 11:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:00.632 11:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:00.890 11:20:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:00.890 11:20:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:01.147 11:20:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:01.405 [2024-10-07 11:20:56.766419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.405 [2024-10-07 11:20:56.874103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.405 [2024-10-07 11:20:56.874114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.686 [2024-10-07 11:20:56.931565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.686 [2024-10-07 11:20:56.931669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:01.686 [2024-10-07 11:20:56.931684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:04.256 11:20:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:04.256 spdk_app_start Round 2 00:10:04.256 11:20:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:04.256 11:20:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
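The jq/grep pair that brackets each round is simply a device-count check against nbd_get_disks. In isolation, with the filters exactly as traced, it looks like this:

  disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')      # /dev/nbd0 and /dev/nbd1 while attached, empty after stop
  count=$(echo "$names" | grep -c /dev/nbd || true)            # expected 2 before nbd_stop_disk, 0 afterwards
  # grep -c exits non-zero when it counts zero matches, which is why a bare 'true' shows up in the trace above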
00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.256 11:20:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:04.514 11:20:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.514 11:20:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:04.514 11:20:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:04.774 Malloc0 00:10:04.774 11:21:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:05.033 Malloc1 00:10:05.033 11:21:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:05.033 11:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:05.292 /dev/nbd0 00:10:05.292 11:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:05.292 11:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:05.292 11:21:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:05.292 1+0 records in 00:10:05.292 1+0 records out 
00:10:05.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293421 s, 14.0 MB/s 00:10:05.550 11:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:05.550 11:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:05.550 11:21:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:05.550 11:21:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:05.550 11:21:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:05.550 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.550 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:05.550 11:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:05.808 /dev/nbd1 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:05.808 1+0 records in 00:10:05.808 1+0 records out 00:10:05.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329093 s, 12.4 MB/s 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:05.808 11:21:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.808 11:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:06.067 { 00:10:06.067 "nbd_device": "/dev/nbd0", 00:10:06.067 "bdev_name": "Malloc0" 00:10:06.067 }, 00:10:06.067 { 00:10:06.067 "nbd_device": "/dev/nbd1", 00:10:06.067 "bdev_name": "Malloc1" 00:10:06.067 } 
00:10:06.067 ]' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:06.067 { 00:10:06.067 "nbd_device": "/dev/nbd0", 00:10:06.067 "bdev_name": "Malloc0" 00:10:06.067 }, 00:10:06.067 { 00:10:06.067 "nbd_device": "/dev/nbd1", 00:10:06.067 "bdev_name": "Malloc1" 00:10:06.067 } 00:10:06.067 ]' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:06.067 /dev/nbd1' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:06.067 /dev/nbd1' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:06.067 256+0 records in 00:10:06.067 256+0 records out 00:10:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626492 s, 167 MB/s 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:06.067 256+0 records in 00:10:06.067 256+0 records out 00:10:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250657 s, 41.8 MB/s 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:06.067 256+0 records in 00:10:06.067 256+0 records out 00:10:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251773 s, 41.6 MB/s 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:06.067 11:21:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:06.067 11:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.326 11:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.585 11:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.843 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:07.102 11:21:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:07.102 11:21:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:07.360 11:21:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:07.619 [2024-10-07 11:21:03.044204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:07.877 [2024-10-07 11:21:03.157664] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.878 [2024-10-07 11:21:03.157659] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.878 [2024-10-07 11:21:03.213484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.878 [2024-10-07 11:21:03.213575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:07.878 [2024-10-07 11:21:03.213590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:10.456 11:21:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58250 /var/tmp/spdk-nbd.sock 00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
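The trace above exercises SPDK's nbd helpers end to end: 1 MiB of random data is pushed through each exported /dev/nbdX device, read back and byte-compared with cmp, and the devices are then detached and counted over the dedicated RPC socket. A condensed, standalone sketch of that flow (an illustration only; paths, device names and the socket are taken from the log, and the real logic lives in the nbd_common.sh helpers traced here):

    # write phase: generate 1 MiB of random data and copy it onto every nbd device
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # teardown: detach each device, wait until the kernel drops it from /proc/partitions,
    # then confirm the target reports no nbd devices left attached
    for dev in "${nbd_list[@]}"; do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    done
    count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]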
00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.456 11:21:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:10.713 11:21:06 event.app_repeat -- event/event.sh@39 -- # killprocess 58250 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58250 ']' 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58250 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58250 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58250' 00:10:10.713 killing process with pid 58250 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58250 00:10:10.713 11:21:06 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58250 00:10:10.998 spdk_app_start is called in Round 0. 00:10:10.998 Shutdown signal received, stop current app iteration 00:10:10.998 Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 reinitialization... 00:10:10.998 spdk_app_start is called in Round 1. 00:10:10.998 Shutdown signal received, stop current app iteration 00:10:10.998 Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 reinitialization... 00:10:10.998 spdk_app_start is called in Round 2. 00:10:10.998 Shutdown signal received, stop current app iteration 00:10:10.998 Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 reinitialization... 00:10:10.998 spdk_app_start is called in Round 3. 00:10:10.998 Shutdown signal received, stop current app iteration 00:10:10.998 11:21:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:10.998 11:21:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:10.998 00:10:10.998 real 0m20.363s 00:10:10.998 user 0m46.316s 00:10:10.998 sys 0m3.091s 00:10:10.998 11:21:06 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.998 11:21:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:10.998 ************************************ 00:10:10.998 END TEST app_repeat 00:10:10.998 ************************************ 00:10:10.998 11:21:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:10.998 11:21:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:10.998 11:21:06 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:10.998 11:21:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.998 11:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:10:10.998 ************************************ 00:10:10.998 START TEST cpu_locks 00:10:10.998 ************************************ 00:10:10.998 11:21:06 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:10.998 * Looking for test storage... 
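killprocess, which tears down the app_repeat target above, checks what it is about to signal before sending anything. A simplified sketch of the behaviour visible in the trace (the real helper in autotest_common.sh has extra branches, for example for sudo-wrapped processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                           # nothing to do if it is already gone (simplification)
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK target
            [ "$process_name" = sudo ] && return 1           # the real helper treats sudo wrappers specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }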
00:10:10.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:10.998 11:21:06 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.998 11:21:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.998 11:21:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.257 11:21:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.257 --rc genhtml_branch_coverage=1 00:10:11.257 --rc genhtml_function_coverage=1 00:10:11.257 --rc genhtml_legend=1 00:10:11.257 --rc geninfo_all_blocks=1 00:10:11.257 --rc geninfo_unexecuted_blocks=1 00:10:11.257 00:10:11.257 ' 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.257 --rc genhtml_branch_coverage=1 00:10:11.257 --rc genhtml_function_coverage=1 
00:10:11.257 --rc genhtml_legend=1 00:10:11.257 --rc geninfo_all_blocks=1 00:10:11.257 --rc geninfo_unexecuted_blocks=1 00:10:11.257 00:10:11.257 ' 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.257 --rc genhtml_branch_coverage=1 00:10:11.257 --rc genhtml_function_coverage=1 00:10:11.257 --rc genhtml_legend=1 00:10:11.257 --rc geninfo_all_blocks=1 00:10:11.257 --rc geninfo_unexecuted_blocks=1 00:10:11.257 00:10:11.257 ' 00:10:11.257 11:21:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.257 --rc genhtml_branch_coverage=1 00:10:11.257 --rc genhtml_function_coverage=1 00:10:11.257 --rc genhtml_legend=1 00:10:11.257 --rc geninfo_all_blocks=1 00:10:11.257 --rc geninfo_unexecuted_blocks=1 00:10:11.258 00:10:11.258 ' 00:10:11.258 11:21:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:11.258 11:21:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:11.258 11:21:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:11.258 11:21:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:11.258 11:21:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.258 11:21:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.258 11:21:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.258 ************************************ 00:10:11.258 START TEST default_locks 00:10:11.258 ************************************ 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58707 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58707 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58707 ']' 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.258 11:21:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.258 [2024-10-07 11:21:06.780576] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
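Before the lock tests start, the harness probes the installed lcov and compares its version with a small pure-bash helper (the lt 1.15 2 / cmp_versions trace above, which splits each version string on '.' and '-'). An alternative one-liner with the same effect, shown only as an illustration and not as the traced helper itself:

    lt() {   # true when version $1 is strictly older than version $2
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov predates 2.x, keep the branch/function coverage flags"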
00:10:11.258 [2024-10-07 11:21:06.780724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58707 ] 00:10:11.518 [2024-10-07 11:21:06.920641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.776 [2024-10-07 11:21:07.042335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.776 [2024-10-07 11:21:07.123787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.361 11:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.361 11:21:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:12.361 11:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58707 00:10:12.361 11:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58707 00:10:12.361 11:21:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58707 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58707 ']' 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58707 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58707 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.619 killing process with pid 58707 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58707' 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58707 00:10:12.619 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58707 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58707 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58707 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58707 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58707 ']' 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.185 
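default_locks verifies the core claim by reading the kernel lock table for the target pid: lslocks -p <pid> piped into grep for spdk_cpu_lock, exactly as traced above. A minimal sketch of that check, with the pid taken from this run:

    locks_exist() {   # does pid $1 hold an spdk_cpu_lock_* file lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 58707 && echo "core lock is held by pid 58707"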
11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.185 ERROR: process (pid: 58707) is no longer running 00:10:13.185 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58707) - No such process 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:13.185 00:10:13.185 real 0m1.940s 00:10:13.185 user 0m2.091s 00:10:13.185 sys 0m0.593s 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.185 11:21:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.185 ************************************ 00:10:13.185 END TEST default_locks 00:10:13.185 ************************************ 00:10:13.185 11:21:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:13.185 11:21:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.185 11:21:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.185 11:21:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.185 ************************************ 00:10:13.185 START TEST default_locks_via_rpc 00:10:13.185 ************************************ 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58759 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58759 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
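Once the target is killed, the test asserts that waitforlisten now fails; the trace above shows the NOT wrapper capturing the exit status (es=1) and the "No such process" error path. A heavily simplified sketch of that negation idiom, assuming only what the trace shows (the real autotest_common.sh version also distinguishes signal exits, the (( es > 128 )) branch):

    NOT() {   # succeed only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT waitforlisten 58707   # expected to fail now that pid 58707 is gone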
00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58759 ']' 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.185 11:21:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.185 [2024-10-07 11:21:08.671132] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:13.185 [2024-10-07 11:21:08.671260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58759 ] 00:10:13.443 [2024-10-07 11:21:08.805299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.443 [2024-10-07 11:21:08.935486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.701 [2024-10-07 11:21:09.014108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58759 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58759 00:10:14.267 11:21:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58759 00:10:14.833 11:21:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58759 ']' 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58759 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58759 00:10:14.833 killing process with pid 58759 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58759' 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58759 00:10:14.833 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58759 00:10:15.092 ************************************ 00:10:15.092 END TEST default_locks_via_rpc 00:10:15.092 ************************************ 00:10:15.092 00:10:15.092 real 0m1.950s 00:10:15.092 user 0m2.099s 00:10:15.092 sys 0m0.585s 00:10:15.092 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.092 11:21:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.092 11:21:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:15.092 11:21:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:15.092 11:21:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.092 11:21:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.092 ************************************ 00:10:15.092 START TEST non_locking_app_on_locked_coremask 00:10:15.092 ************************************ 00:10:15.092 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58810 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58810 /var/tmp/spdk.sock 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58810 ']' 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
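The default_locks_via_rpc run traced just above drives the same lock from the RPC side: the locks are released and re-acquired on a live target, with lslocks consulted after each step. A sketch using the RPC method names visible in the trace (rpc_cmd is the test wrapper around scripts/rpc.py; spdk_tgt_pid held 58759 in this run):

    # release the core lock files held by the running target, then confirm none remain
    scripts/rpc.py framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

    # take the locks again and confirm the claim is back
    scripts/rpc.py framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"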
00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.351 11:21:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.351 [2024-10-07 11:21:10.680514] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:15.351 [2024-10-07 11:21:10.680619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:10:15.351 [2024-10-07 11:21:10.820753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.610 [2024-10-07 11:21:10.941251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.610 [2024-10-07 11:21:11.018137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58826 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58826 /var/tmp/spdk2.sock 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58826 ']' 00:10:16.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.545 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.546 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.546 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.546 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.546 [2024-10-07 11:21:11.776941] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:16.546 [2024-10-07 11:21:11.777346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58826 ] 00:10:16.546 [2024-10-07 11:21:11.924322] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
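non_locking_app_on_locked_coremask then shows that a second target may share core 0 as long as it opts out of the lock: one instance is started normally and a second with --disable-cpumask-locks on its own RPC socket, both with -m 0x1. A sketch of the two invocations from the trace (binary path shortened):

    # first instance claims core 0 and its /var/tmp/spdk_cpu_lock_000 file
    build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!

    # second instance reuses the same core mask but skips the lock files, and
    # listens on a separate RPC socket so the two targets can be driven independently
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!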
00:10:16.546 [2024-10-07 11:21:11.924409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.805 [2024-10-07 11:21:12.165668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.805 [2024-10-07 11:21:12.317382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.380 11:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.380 11:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:17.380 11:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58810 00:10:17.380 11:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58810 00:10:17.380 11:21:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58810 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58810 ']' 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58810 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58810 00:10:18.319 killing process with pid 58810 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58810' 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58810 00:10:18.319 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58810 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58826 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58826 ']' 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58826 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.886 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58826 00:10:19.144 killing process with pid 58826 00:10:19.144 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.144 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.144 11:21:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58826' 00:10:19.144 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58826 00:10:19.144 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58826 00:10:19.402 ************************************ 00:10:19.402 END TEST non_locking_app_on_locked_coremask 00:10:19.402 00:10:19.402 real 0m4.236s 00:10:19.402 user 0m4.765s 00:10:19.402 sys 0m1.128s 00:10:19.402 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.402 11:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.402 ************************************ 00:10:19.402 11:21:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:19.402 11:21:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.402 11:21:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.402 11:21:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:19.402 ************************************ 00:10:19.402 START TEST locking_app_on_unlocked_coremask 00:10:19.402 ************************************ 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58893 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58893 /var/tmp/spdk.sock 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58893 ']' 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.402 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.660 [2024-10-07 11:21:14.978626] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:19.660 [2024-10-07 11:21:14.978732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:10:19.660 [2024-10-07 11:21:15.113804] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:19.660 [2024-10-07 11:21:15.113857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.919 [2024-10-07 11:21:15.232851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.919 [2024-10-07 11:21:15.309942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58915 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58915 /var/tmp/spdk2.sock 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58915 ']' 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.485 11:21:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.743 [2024-10-07 11:21:16.087309] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:20.743 [2024-10-07 11:21:16.087885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ] 00:10:20.743 [2024-10-07 11:21:16.230312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.001 [2024-10-07 11:21:16.472252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.260 [2024-10-07 11:21:16.631613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.827 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.827 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:21.827 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58915 00:10:21.827 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58915 00:10:21.827 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:22.762 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58893 00:10:22.762 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58893 ']' 00:10:22.762 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58893 00:10:22.763 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:22.763 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.763 11:21:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58893 00:10:22.763 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.763 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.763 killing process with pid 58893 00:10:22.763 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58893' 00:10:22.763 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58893 00:10:22.763 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58893 00:10:23.326 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58915 00:10:23.326 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58915 ']' 00:10:23.326 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58915 00:10:23.326 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58915 00:10:23.584 killing process with pid 58915 00:10:23.584 11:21:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58915' 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58915 00:10:23.584 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58915 00:10:23.842 ************************************ 00:10:23.842 END TEST locking_app_on_unlocked_coremask 00:10:23.842 ************************************ 00:10:23.842 00:10:23.842 real 0m4.410s 00:10:23.842 user 0m4.987s 00:10:23.842 sys 0m1.221s 00:10:23.842 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.842 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:23.842 11:21:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:23.842 11:21:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.842 11:21:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.842 11:21:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:23.842 ************************************ 00:10:24.100 START TEST locking_app_on_locked_coremask 00:10:24.100 ************************************ 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58982 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58982 /var/tmp/spdk.sock 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58982 ']' 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.100 11:21:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.100 [2024-10-07 11:21:19.435608] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:24.100 [2024-10-07 11:21:19.435734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:10:24.100 [2024-10-07 11:21:19.576085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.358 [2024-10-07 11:21:19.706812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.358 [2024-10-07 11:21:19.785736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58998 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58998 /var/tmp/spdk2.sock 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58998 /var/tmp/spdk2.sock 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58998 /var/tmp/spdk2.sock 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58998 ']' 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.294 11:21:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:25.294 [2024-10-07 11:21:20.548940] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
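locking_app_on_locked_coremask flips the expectation: the second target keeps cpumask locks enabled while reusing -m 0x1, so its startup must be rejected by the lock held by the first instance (the claim error appears just below). A sketch of that expected-failure pattern, reconstructed from the NOT/waitforlisten wrapping in the trace:

    build/bin/spdk_tgt -m 0x1 &                          # first instance, claims core 0
    spdk_tgt_pid=$!
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # same mask, locks still enabled
    spdk_tgt_pid2=$!
    # the second instance is expected to abort with
    # "Cannot create lock on core 0, probably process <pid> has claimed it"
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock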
00:10:25.294 [2024-10-07 11:21:20.549218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58998 ] 00:10:25.294 [2024-10-07 11:21:20.689220] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58982 has claimed it. 00:10:25.294 [2024-10-07 11:21:20.689284] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:25.862 ERROR: process (pid: 58998) is no longer running 00:10:25.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58998) - No such process 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58982 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:25.862 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58982 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58982 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58982 ']' 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58982 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58982 00:10:26.430 killing process with pid 58982 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58982' 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58982 00:10:26.430 11:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58982 00:10:26.997 00:10:26.997 real 0m2.922s 00:10:26.997 user 0m3.457s 00:10:26.997 sys 0m0.709s 00:10:26.997 11:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.998 ************************************ 00:10:26.998 END 
TEST locking_app_on_locked_coremask 00:10:26.998 ************************************ 00:10:26.998 11:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:26.998 11:21:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:26.998 11:21:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.998 11:21:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.998 11:21:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:26.998 ************************************ 00:10:26.998 START TEST locking_overlapped_coremask 00:10:26.998 ************************************ 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:26.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59049 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59049 /var/tmp/spdk.sock 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59049 ']' 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.998 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:26.998 [2024-10-07 11:21:22.402652] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:26.998 [2024-10-07 11:21:22.402739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:10:27.256 [2024-10-07 11:21:22.536694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.256 [2024-10-07 11:21:22.656445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.256 [2024-10-07 11:21:22.656599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.256 [2024-10-07 11:21:22.656605] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.256 [2024-10-07 11:21:22.730375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.514 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.514 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59059 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59059 /var/tmp/spdk2.sock 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59059 /var/tmp/spdk2.sock 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59059 /var/tmp/spdk2.sock 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59059 ']' 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:27.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.515 11:21:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:27.515 [2024-10-07 11:21:22.981727] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
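locking_overlapped_coremask uses two partially overlapping masks: the first target runs with -m 0x7 (cores 0, 1 and 2) and the second with -m 0x1c (cores 2, 3 and 4), so the only contended core is core 2. The second launch is again wrapped in NOT and is expected to fail on exactly that core, as the claim error below confirms. A quick way to see the overlap:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only core 2 is shared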
00:10:27.515 [2024-10-07 11:21:22.981817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:10:27.773 [2024-10-07 11:21:23.122588] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59049 has claimed it. 00:10:27.773 [2024-10-07 11:21:23.122664] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:28.353 ERROR: process (pid: 59059) is no longer running 00:10:28.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59059) - No such process 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59049 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59049 ']' 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59049 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59049 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59049' 00:10:28.353 killing process with pid 59049 00:10:28.353 11:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59049 00:10:28.353 11:21:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59049 00:10:28.921 00:10:28.921 real 0m1.858s 00:10:28.921 user 0m4.943s 00:10:28.921 sys 0m0.407s 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:28.921 ************************************ 00:10:28.921 END TEST locking_overlapped_coremask 00:10:28.921 ************************************ 00:10:28.921 11:21:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:28.921 11:21:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:28.921 11:21:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.921 11:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:28.921 ************************************ 00:10:28.921 START TEST locking_overlapped_coremask_via_rpc 00:10:28.921 ************************************ 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:28.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59105 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59105 /var/tmp/spdk.sock 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59105 ']' 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.921 11:21:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.921 [2024-10-07 11:21:24.315873] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:28.921 [2024-10-07 11:21:24.315970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59105 ] 00:10:29.179 [2024-10-07 11:21:24.449650] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:29.179 [2024-10-07 11:21:24.449694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.179 [2024-10-07 11:21:24.566637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.179 [2024-10-07 11:21:24.566755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.179 [2024-10-07 11:21:24.566761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.179 [2024-10-07 11:21:24.641348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59123 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59123 /var/tmp/spdk2.sock 00:10:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59123 ']' 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.127 11:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.127 [2024-10-07 11:21:25.416051] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:30.127 [2024-10-07 11:21:25.416140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:10:30.127 [2024-10-07 11:21:25.559542] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:30.127 [2024-10-07 11:21:25.559598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.385 [2024-10-07 11:21:25.791892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.385 [2024-10-07 11:21:25.791979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:30.385 [2024-10-07 11:21:25.791980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.644 [2024-10-07 11:21:25.938929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.210 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.211 [2024-10-07 11:21:26.519466] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59105 has claimed it. 00:10:31.211 request: 00:10:31.211 { 00:10:31.211 "method": "framework_enable_cpumask_locks", 00:10:31.211 "req_id": 1 00:10:31.211 } 00:10:31.211 Got JSON-RPC error response 00:10:31.211 response: 00:10:31.211 { 00:10:31.211 "code": -32603, 00:10:31.211 "message": "Failed to claim CPU core: 2" 00:10:31.211 } 00:10:31.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59105 /var/tmp/spdk.sock 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59105 ']' 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.211 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59123 /var/tmp/spdk2.sock 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59123 ']' 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:31.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
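The "Failed to claim CPU core: 2" error above comes from the per-core lock files spdk_tgt keeps under /var/tmp. A minimal sketch of reproducing it by hand, not taken from the captured run, assuming the first target (pid 59105, -m 0x7) has already enabled its locks and the second target (-m 0x1c) is still listening on /var/tmp/spdk2.sock as in the trace:

    # cores 0-2 are held by the first target once its cpumask locks are enabled
    ls /var/tmp/spdk_cpu_lock_*    # spdk_cpu_lock_000 spdk_cpu_lock_001 spdk_cpu_lock_002
    # asking the second target to claim its mask overlaps on core 2 and is rejected
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"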
00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.469 11:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:31.728 00:10:31.728 real 0m2.879s 00:10:31.728 user 0m1.599s 00:10:31.728 sys 0m0.212s 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.728 11:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.728 ************************************ 00:10:31.728 END TEST locking_overlapped_coremask_via_rpc 00:10:31.728 ************************************ 00:10:31.728 11:21:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:31.728 11:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59105 ]] 00:10:31.728 11:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59105 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59105 ']' 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59105 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59105 00:10:31.728 killing process with pid 59105 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59105' 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59105 00:10:31.728 11:21:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59105 00:10:32.294 11:21:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59123 ]] 00:10:32.294 11:21:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59123 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59123 ']' 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59123 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.294 
11:21:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59123 00:10:32.294 killing process with pid 59123 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59123' 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59123 00:10:32.294 11:21:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59123 00:10:32.551 11:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59105 ]] 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59105 00:10:32.552 Process with pid 59105 is not found 00:10:32.552 Process with pid 59123 is not found 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59105 ']' 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59105 00:10:32.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59105) - No such process 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59105 is not found' 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59123 ]] 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59123 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59123 ']' 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59123 00:10:32.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59123) - No such process 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59123 is not found' 00:10:32.552 11:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:32.552 ************************************ 00:10:32.552 END TEST cpu_locks 00:10:32.552 ************************************ 00:10:32.552 00:10:32.552 real 0m21.649s 00:10:32.552 user 0m37.610s 00:10:32.552 sys 0m5.752s 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.552 11:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:32.810 ************************************ 00:10:32.810 END TEST event 00:10:32.810 ************************************ 00:10:32.810 00:10:32.810 real 0m51.321s 00:10:32.810 user 1m38.749s 00:10:32.810 sys 0m9.676s 00:10:32.810 11:21:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.810 11:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:10:32.810 11:21:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:32.810 11:21:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:32.810 11:21:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.810 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:10:32.810 ************************************ 00:10:32.810 START TEST thread 00:10:32.810 ************************************ 00:10:32.810 11:21:28 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:32.810 * Looking for test storage... 
00:10:32.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:32.810 11:21:28 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:32.810 11:21:28 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:10:32.810 11:21:28 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:33.067 11:21:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.067 11:21:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.067 11:21:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.067 11:21:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.067 11:21:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.067 11:21:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.067 11:21:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.067 11:21:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.067 11:21:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.067 11:21:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.067 11:21:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.067 11:21:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:33.067 11:21:28 thread -- scripts/common.sh@345 -- # : 1 00:10:33.067 11:21:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.067 11:21:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.067 11:21:28 thread -- scripts/common.sh@365 -- # decimal 1 00:10:33.067 11:21:28 thread -- scripts/common.sh@353 -- # local d=1 00:10:33.067 11:21:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.067 11:21:28 thread -- scripts/common.sh@355 -- # echo 1 00:10:33.067 11:21:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.067 11:21:28 thread -- scripts/common.sh@366 -- # decimal 2 00:10:33.067 11:21:28 thread -- scripts/common.sh@353 -- # local d=2 00:10:33.067 11:21:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.067 11:21:28 thread -- scripts/common.sh@355 -- # echo 2 00:10:33.067 11:21:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.067 11:21:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.067 11:21:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.067 11:21:28 thread -- scripts/common.sh@368 -- # return 0 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.067 --rc genhtml_branch_coverage=1 00:10:33.067 --rc genhtml_function_coverage=1 00:10:33.067 --rc genhtml_legend=1 00:10:33.067 --rc geninfo_all_blocks=1 00:10:33.067 --rc geninfo_unexecuted_blocks=1 00:10:33.067 00:10:33.067 ' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.067 --rc genhtml_branch_coverage=1 00:10:33.067 --rc genhtml_function_coverage=1 00:10:33.067 --rc genhtml_legend=1 00:10:33.067 --rc geninfo_all_blocks=1 00:10:33.067 --rc geninfo_unexecuted_blocks=1 00:10:33.067 00:10:33.067 ' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:33.067 --rc genhtml_branch_coverage=1 00:10:33.067 --rc genhtml_function_coverage=1 00:10:33.067 --rc genhtml_legend=1 00:10:33.067 --rc geninfo_all_blocks=1 00:10:33.067 --rc geninfo_unexecuted_blocks=1 00:10:33.067 00:10:33.067 ' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.067 --rc genhtml_branch_coverage=1 00:10:33.067 --rc genhtml_function_coverage=1 00:10:33.067 --rc genhtml_legend=1 00:10:33.067 --rc geninfo_all_blocks=1 00:10:33.067 --rc geninfo_unexecuted_blocks=1 00:10:33.067 00:10:33.067 ' 00:10:33.067 11:21:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.067 11:21:28 thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.067 ************************************ 00:10:33.067 START TEST thread_poller_perf 00:10:33.067 ************************************ 00:10:33.067 11:21:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:33.067 [2024-10-07 11:21:28.380247] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:33.067 [2024-10-07 11:21:28.380559] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:10:33.067 [2024-10-07 11:21:28.522013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.324 [2024-10-07 11:21:28.643784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.324 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:34.258 [2024-10-07T11:21:29.781Z] ====================================== 00:10:34.259 [2024-10-07T11:21:29.782Z] busy:2210966926 (cyc) 00:10:34.259 [2024-10-07T11:21:29.782Z] total_run_count: 316000 00:10:34.259 [2024-10-07T11:21:29.782Z] tsc_hz: 2200000000 (cyc) 00:10:34.259 [2024-10-07T11:21:29.782Z] ====================================== 00:10:34.259 [2024-10-07T11:21:29.782Z] poller_cost: 6996 (cyc), 3180 (nsec) 00:10:34.259 00:10:34.259 ************************************ 00:10:34.259 END TEST thread_poller_perf 00:10:34.259 ************************************ 00:10:34.259 real 0m1.380s 00:10:34.259 user 0m1.212s 00:10:34.259 sys 0m0.060s 00:10:34.259 11:21:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.259 11:21:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 11:21:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:34.517 11:21:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:34.517 11:21:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.517 11:21:29 thread -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 ************************************ 00:10:34.517 START TEST thread_poller_perf 00:10:34.517 ************************************ 00:10:34.517 11:21:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:34.517 [2024-10-07 11:21:29.813714] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:34.517 [2024-10-07 11:21:29.813830] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:10:34.517 [2024-10-07 11:21:29.951159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.775 Running 1000 pollers for 1 seconds with 0 microseconds period. 
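The reported poller_cost above is consistent with the busy cycle count divided by the run count, converted to nanoseconds via the advertised tsc_hz; a quick check of those figures (assumed relationship, plain bash integer arithmetic, not part of the test output):

    busy=2210966926 runs=316000 tsc_hz=2200000000
    echo "cyc per poll: $(( busy / runs ))"                        # 6996
    echo "ns per poll:  $(( busy / runs * 1000000000 / tsc_hz ))"  # 3180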
00:10:34.775 [2024-10-07 11:21:30.062127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.709 [2024-10-07T11:21:31.232Z] ====================================== 00:10:35.709 [2024-10-07T11:21:31.232Z] busy:2202267650 (cyc) 00:10:35.709 [2024-10-07T11:21:31.232Z] total_run_count: 4108000 00:10:35.709 [2024-10-07T11:21:31.232Z] tsc_hz: 2200000000 (cyc) 00:10:35.709 [2024-10-07T11:21:31.232Z] ====================================== 00:10:35.709 [2024-10-07T11:21:31.232Z] poller_cost: 536 (cyc), 243 (nsec) 00:10:35.709 ************************************ 00:10:35.709 END TEST thread_poller_perf 00:10:35.709 ************************************ 00:10:35.709 00:10:35.709 real 0m1.357s 00:10:35.709 user 0m1.184s 00:10:35.709 sys 0m0.065s 00:10:35.709 11:21:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.709 11:21:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:35.709 11:21:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:35.709 ************************************ 00:10:35.709 END TEST thread 00:10:35.709 ************************************ 00:10:35.709 00:10:35.709 real 0m3.034s 00:10:35.709 user 0m2.557s 00:10:35.709 sys 0m0.260s 00:10:35.709 11:21:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.709 11:21:31 thread -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 11:21:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:35.967 11:21:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:35.967 11:21:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.967 11:21:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.967 11:21:31 -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 ************************************ 00:10:35.967 START TEST app_cmdline 00:10:35.967 ************************************ 00:10:35.967 11:21:31 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:35.967 * Looking for test storage... 
00:10:35.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:35.967 11:21:31 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:35.967 11:21:31 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:35.967 11:21:31 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:10:35.967 11:21:31 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:35.967 11:21:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.968 11:21:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.968 --rc genhtml_branch_coverage=1 00:10:35.968 --rc genhtml_function_coverage=1 00:10:35.968 --rc genhtml_legend=1 00:10:35.968 --rc geninfo_all_blocks=1 00:10:35.968 --rc geninfo_unexecuted_blocks=1 00:10:35.968 00:10:35.968 ' 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.968 --rc genhtml_branch_coverage=1 00:10:35.968 --rc genhtml_function_coverage=1 00:10:35.968 --rc genhtml_legend=1 00:10:35.968 --rc geninfo_all_blocks=1 00:10:35.968 --rc geninfo_unexecuted_blocks=1 00:10:35.968 
00:10:35.968 ' 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.968 --rc genhtml_branch_coverage=1 00:10:35.968 --rc genhtml_function_coverage=1 00:10:35.968 --rc genhtml_legend=1 00:10:35.968 --rc geninfo_all_blocks=1 00:10:35.968 --rc geninfo_unexecuted_blocks=1 00:10:35.968 00:10:35.968 ' 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.968 --rc genhtml_branch_coverage=1 00:10:35.968 --rc genhtml_function_coverage=1 00:10:35.968 --rc genhtml_legend=1 00:10:35.968 --rc geninfo_all_blocks=1 00:10:35.968 --rc geninfo_unexecuted_blocks=1 00:10:35.968 00:10:35.968 ' 00:10:35.968 11:21:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:35.968 11:21:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59377 00:10:35.968 11:21:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:35.968 11:21:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59377 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59377 ']' 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.968 11:21:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:36.226 [2024-10-07 11:21:31.494573] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:36.226 [2024-10-07 11:21:31.494872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59377 ] 00:10:36.226 [2024-10-07 11:21:31.634784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.484 [2024-10-07 11:21:31.760774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.484 [2024-10-07 11:21:31.839961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.050 11:21:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.050 11:21:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:37.050 11:21:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:37.309 { 00:10:37.309 "version": "SPDK v25.01-pre git sha1 2a4f56c54", 00:10:37.309 "fields": { 00:10:37.309 "major": 25, 00:10:37.309 "minor": 1, 00:10:37.309 "patch": 0, 00:10:37.309 "suffix": "-pre", 00:10:37.309 "commit": "2a4f56c54" 00:10:37.309 } 00:10:37.309 } 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:37.309 11:21:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:37.309 11:21:32 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:37.567 request: 00:10:37.567 { 00:10:37.567 "method": "env_dpdk_get_mem_stats", 00:10:37.567 "req_id": 1 00:10:37.567 } 00:10:37.567 Got JSON-RPC error response 00:10:37.567 response: 00:10:37.567 { 00:10:37.567 "code": -32601, 00:10:37.567 "message": "Method not found" 00:10:37.567 } 00:10:37.567 11:21:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:37.567 11:21:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:37.567 11:21:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:37.567 11:21:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:37.568 11:21:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59377 00:10:37.568 11:21:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59377 ']' 00:10:37.568 11:21:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59377 00:10:37.568 11:21:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59377 00:10:37.826 killing process with pid 59377 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59377' 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 59377 00:10:37.826 11:21:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 59377 00:10:38.084 00:10:38.084 real 0m2.293s 00:10:38.084 user 0m2.833s 00:10:38.084 sys 0m0.514s 00:10:38.084 ************************************ 00:10:38.084 END TEST app_cmdline 00:10:38.084 ************************************ 00:10:38.084 11:21:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.084 11:21:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:38.084 11:21:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:38.084 11:21:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:38.084 11:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.084 11:21:33 -- common/autotest_common.sh@10 -- # set +x 00:10:38.084 ************************************ 00:10:38.084 START TEST version 00:10:38.084 ************************************ 00:10:38.084 11:21:33 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:38.342 * Looking for test storage... 
00:10:38.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.342 11:21:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.342 11:21:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.342 11:21:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.342 11:21:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.342 11:21:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.342 11:21:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.342 11:21:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.342 11:21:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.342 11:21:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.342 11:21:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.342 11:21:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.342 11:21:33 version -- scripts/common.sh@344 -- # case "$op" in 00:10:38.342 11:21:33 version -- scripts/common.sh@345 -- # : 1 00:10:38.342 11:21:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.342 11:21:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.342 11:21:33 version -- scripts/common.sh@365 -- # decimal 1 00:10:38.342 11:21:33 version -- scripts/common.sh@353 -- # local d=1 00:10:38.342 11:21:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.342 11:21:33 version -- scripts/common.sh@355 -- # echo 1 00:10:38.342 11:21:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.342 11:21:33 version -- scripts/common.sh@366 -- # decimal 2 00:10:38.342 11:21:33 version -- scripts/common.sh@353 -- # local d=2 00:10:38.342 11:21:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.342 11:21:33 version -- scripts/common.sh@355 -- # echo 2 00:10:38.342 11:21:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.342 11:21:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.342 11:21:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.342 11:21:33 version -- scripts/common.sh@368 -- # return 0 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.342 --rc genhtml_branch_coverage=1 00:10:38.342 --rc genhtml_function_coverage=1 00:10:38.342 --rc genhtml_legend=1 00:10:38.342 --rc geninfo_all_blocks=1 00:10:38.342 --rc geninfo_unexecuted_blocks=1 00:10:38.342 00:10:38.342 ' 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.342 --rc genhtml_branch_coverage=1 00:10:38.342 --rc genhtml_function_coverage=1 00:10:38.342 --rc genhtml_legend=1 00:10:38.342 --rc geninfo_all_blocks=1 00:10:38.342 --rc geninfo_unexecuted_blocks=1 00:10:38.342 00:10:38.342 ' 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.342 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:38.342 --rc genhtml_branch_coverage=1 00:10:38.342 --rc genhtml_function_coverage=1 00:10:38.342 --rc genhtml_legend=1 00:10:38.342 --rc geninfo_all_blocks=1 00:10:38.342 --rc geninfo_unexecuted_blocks=1 00:10:38.342 00:10:38.342 ' 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.342 --rc genhtml_branch_coverage=1 00:10:38.342 --rc genhtml_function_coverage=1 00:10:38.342 --rc genhtml_legend=1 00:10:38.342 --rc geninfo_all_blocks=1 00:10:38.342 --rc geninfo_unexecuted_blocks=1 00:10:38.342 00:10:38.342 ' 00:10:38.342 11:21:33 version -- app/version.sh@17 -- # get_header_version major 00:10:38.342 11:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # cut -f2 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:38.342 11:21:33 version -- app/version.sh@17 -- # major=25 00:10:38.342 11:21:33 version -- app/version.sh@18 -- # get_header_version minor 00:10:38.342 11:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # cut -f2 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:38.342 11:21:33 version -- app/version.sh@18 -- # minor=1 00:10:38.342 11:21:33 version -- app/version.sh@19 -- # get_header_version patch 00:10:38.342 11:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # cut -f2 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:38.342 11:21:33 version -- app/version.sh@19 -- # patch=0 00:10:38.342 11:21:33 version -- app/version.sh@20 -- # get_header_version suffix 00:10:38.342 11:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # cut -f2 00:10:38.342 11:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:38.342 11:21:33 version -- app/version.sh@20 -- # suffix=-pre 00:10:38.342 11:21:33 version -- app/version.sh@22 -- # version=25.1 00:10:38.342 11:21:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:38.342 11:21:33 version -- app/version.sh@28 -- # version=25.1rc0 00:10:38.342 11:21:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:38.342 11:21:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:38.342 11:21:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:38.342 11:21:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:38.342 ************************************ 00:10:38.342 END TEST version 00:10:38.342 ************************************ 00:10:38.342 00:10:38.342 real 0m0.252s 00:10:38.342 user 0m0.175s 00:10:38.342 sys 0m0.112s 00:10:38.342 11:21:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.342 11:21:33 version -- common/autotest_common.sh@10 -- # set +x 00:10:38.600 11:21:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:38.601 11:21:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:38.601 11:21:33 -- spdk/autotest.sh@194 -- # uname -s 00:10:38.601 11:21:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:38.601 11:21:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:38.601 11:21:33 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:38.601 11:21:33 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:38.601 11:21:33 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:38.601 11:21:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:38.601 11:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.601 11:21:33 -- common/autotest_common.sh@10 -- # set +x 00:10:38.601 ************************************ 00:10:38.601 START TEST spdk_dd 00:10:38.601 ************************************ 00:10:38.601 11:21:33 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:38.601 * Looking for test storage... 00:10:38.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:38.601 11:21:33 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.601 11:21:33 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.601 11:21:33 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@345 -- # : 1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@368 -- # return 0 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.601 --rc genhtml_branch_coverage=1 00:10:38.601 --rc genhtml_function_coverage=1 00:10:38.601 --rc genhtml_legend=1 00:10:38.601 --rc geninfo_all_blocks=1 00:10:38.601 --rc geninfo_unexecuted_blocks=1 00:10:38.601 00:10:38.601 ' 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.601 --rc genhtml_branch_coverage=1 00:10:38.601 --rc genhtml_function_coverage=1 00:10:38.601 --rc genhtml_legend=1 00:10:38.601 --rc geninfo_all_blocks=1 00:10:38.601 --rc geninfo_unexecuted_blocks=1 00:10:38.601 00:10:38.601 ' 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.601 --rc genhtml_branch_coverage=1 00:10:38.601 --rc genhtml_function_coverage=1 00:10:38.601 --rc genhtml_legend=1 00:10:38.601 --rc geninfo_all_blocks=1 00:10:38.601 --rc geninfo_unexecuted_blocks=1 00:10:38.601 00:10:38.601 ' 00:10:38.601 11:21:34 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.601 --rc genhtml_branch_coverage=1 00:10:38.601 --rc genhtml_function_coverage=1 00:10:38.601 --rc genhtml_legend=1 00:10:38.601 --rc geninfo_all_blocks=1 00:10:38.601 --rc geninfo_unexecuted_blocks=1 00:10:38.601 00:10:38.601 ' 00:10:38.601 11:21:34 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.601 11:21:34 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.601 11:21:34 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.601 11:21:34 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.601 11:21:34 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.601 11:21:34 spdk_dd -- paths/export.sh@5 -- # export PATH 00:10:38.601 11:21:34 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.601 11:21:34 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:39.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.169 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.169 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.169 11:21:34 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:10:39.169 11:21:34 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:39.169 11:21:34 spdk_dd -- scripts/common.sh@233 -- # local class 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@235 -- # local progif 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@236 -- # class=01 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:10:39.170 11:21:34 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:10:39.170 11:21:34 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:39.170 11:21:34 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@139 -- # local lib 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
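The nvme_in_userspace trace above resolves the PCI class triple 01/08/02 (mass storage / NVM Express / NVMe) and walks lspci output to collect the controller addresses 0000:00:10.0 and 0000:00:11.0. A minimal sketch of that enumeration, assuming only that plain lspci is available; the real helpers additionally filter on the -p02 programming-interface token and the PCI_ALLOWED/PCI_BLOCKED lists, which this sketch skips:

#!/usr/bin/env bash
# Sketch only: list PCI functions whose class/subclass is 01/08, the same set
# that "iter_pci_class_code 01 08 02" walks in the trace above.
nvme_bdfs_sketch() {
    local bdf class rest
    while read -r bdf class rest; do
        class=${class//\"/}                   # lspci -mm quotes each field
        [[ $class == 0108* ]] && echo "$bdf"  # 0x01/0x08 = NVM Express controller
    done < <(lspci -mm -n -D)
}
nvme_bdfs_sketch    # prints 0000:00:10.0 and 0000:00:11.0 on this VM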
00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
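The check_liburing loop running here (and continuing below) reads each NEEDED entry that objdump reports for the spdk_dd binary and tests it against liburing.so.*; the liburing.so.2 match further down is what flips liburing_in_use to 1 and prints "spdk_dd linked to liburing". A condensed sketch of the same check, using the binary path from this job:

# Sketch only: decide whether a binary is dynamically linked against liburing
# by scanning the NEEDED entries of its dynamic section.
check_liburing_sketch() {
    local binary=$1 lib liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p "$binary" | grep NEEDED)
    (( liburing_in_use )) && printf '* %s linked to liburing\n' "${binary##*/}"
}
check_liburing_sketch /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd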
00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.15.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.170 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:10:39.171 * spdk_dd linked to liburing 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:39.171 11:21:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:39.171 11:21:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:39.172 11:21:34 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:39.172 11:21:34 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:10:39.172 11:21:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:10:39.172 11:21:34 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:10:39.172 11:21:34 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:10:39.172 11:21:34 spdk_dd -- dd/common.sh@153 -- # return 0 00:10:39.172 11:21:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:10:39.172 11:21:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:39.172 11:21:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.172 11:21:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.172 11:21:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 ************************************ 00:10:39.172 START TEST spdk_dd_basic_rw 00:10:39.172 ************************************ 00:10:39.172 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:39.172 * Looking for test storage... 00:10:39.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:39.172 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:39.172 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:10:39.172 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:39.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.431 --rc genhtml_branch_coverage=1 00:10:39.431 --rc genhtml_function_coverage=1 00:10:39.431 --rc genhtml_legend=1 00:10:39.431 --rc geninfo_all_blocks=1 00:10:39.431 --rc geninfo_unexecuted_blocks=1 00:10:39.431 00:10:39.431 ' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:39.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.431 --rc genhtml_branch_coverage=1 00:10:39.431 --rc genhtml_function_coverage=1 00:10:39.431 --rc genhtml_legend=1 00:10:39.431 --rc geninfo_all_blocks=1 00:10:39.431 --rc geninfo_unexecuted_blocks=1 00:10:39.431 00:10:39.431 ' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:39.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.431 --rc genhtml_branch_coverage=1 00:10:39.431 --rc genhtml_function_coverage=1 00:10:39.431 --rc genhtml_legend=1 00:10:39.431 --rc geninfo_all_blocks=1 00:10:39.431 --rc geninfo_unexecuted_blocks=1 00:10:39.431 00:10:39.431 ' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:39.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.431 --rc genhtml_branch_coverage=1 00:10:39.431 --rc genhtml_function_coverage=1 00:10:39.431 --rc genhtml_legend=1 00:10:39.431 --rc geninfo_all_blocks=1 00:10:39.431 --rc geninfo_unexecuted_blocks=1 00:10:39.431 00:10:39.431 ' 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:39.431 11:21:34 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:10:39.431 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:10:39.696 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:10:39.696 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 ************************************ 00:10:39.697 START TEST dd_bs_lt_native_bs 00:10:39.697 ************************************ 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:39.697 11:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:39.697 { 00:10:39.697 "subsystems": [ 00:10:39.697 { 00:10:39.697 "subsystem": "bdev", 00:10:39.697 "config": [ 00:10:39.697 { 00:10:39.697 "params": { 00:10:39.697 "trtype": "pcie", 00:10:39.697 "traddr": "0000:00:10.0", 00:10:39.697 "name": "Nvme0" 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 }, 00:10:39.697 { 00:10:39.697 "method": "bdev_wait_for_examine" 00:10:39.697 } 00:10:39.697 ] 00:10:39.697 } 00:10:39.697 ] 00:10:39.697 } 00:10:39.697 [2024-10-07 11:21:35.024151] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:39.697 [2024-10-07 11:21:35.024233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:10:39.697 [2024-10-07 11:21:35.162191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.972 [2024-10-07 11:21:35.293457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.972 [2024-10-07 11:21:35.353646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.972 [2024-10-07 11:21:35.475838] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:10:39.972 [2024-10-07 11:21:35.475919] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:40.231 [2024-10-07 11:21:35.601439] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:40.231 00:10:40.231 real 0m0.730s 00:10:40.231 user 0m0.510s 00:10:40.231 sys 0m0.175s 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.231 11:21:35 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:10:40.231 ************************************ 00:10:40.231 END TEST dd_bs_lt_native_bs 00:10:40.231 ************************************ 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.231 11:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.231 ************************************ 00:10:40.231 START TEST dd_rw 00:10:40.231 ************************************ 00:10:40.489 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:40.490 11:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.057 11:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:10:41.057 11:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:41.057 11:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.057 11:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.057 [2024-10-07 11:21:36.464808] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
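The trace above closes out dd_bs_lt_native_bs and moves on to dd_rw. The negative test has two steps: the native block size is pulled out of the controller identify text with the regex match shown earlier (the current format is LBA Format #04, so native_bs becomes 4096), and spdk_dd is then run with --bs=2048 under the NOT wrapper, which succeeds only when the wrapped command fails. A minimal bash sketch of that check, based on the traced commands rather than the actual dd/common.sh and dd/basic_rw.sh sources (the identify text is assumed to sit in $id_output, an invented name; NOT is the autotest helper named in the trace; the full spdk_dd build path is shortened):

    # scrape the native block size out of the identify dump (4096 for this emulated controller)
    re='LBA Format #04: Data Size: *([0-9]+)'
    [[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}
    # spdk_dd must refuse a --bs smaller than the native block size; NOT inverts the exit status
    NOT spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61

The expected failure shows up above as "--bs value cannot be less than input (1) neither output (4096) native block size", after which the helper folds the non-zero exit status back into a pass (the es=234 -> es=106 -> es=1 sequence).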
00:10:41.057 [2024-10-07 11:21:36.464907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:10:41.057 { 00:10:41.057 "subsystems": [ 00:10:41.057 { 00:10:41.057 "subsystem": "bdev", 00:10:41.057 "config": [ 00:10:41.057 { 00:10:41.057 "params": { 00:10:41.057 "trtype": "pcie", 00:10:41.057 "traddr": "0000:00:10.0", 00:10:41.057 "name": "Nvme0" 00:10:41.057 }, 00:10:41.057 "method": "bdev_nvme_attach_controller" 00:10:41.057 }, 00:10:41.057 { 00:10:41.057 "method": "bdev_wait_for_examine" 00:10:41.057 } 00:10:41.057 ] 00:10:41.057 } 00:10:41.057 ] 00:10:41.057 } 00:10:41.316 [2024-10-07 11:21:36.597746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.316 [2024-10-07 11:21:36.711576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.316 [2024-10-07 11:21:36.764408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.575  [2024-10-07T11:21:37.098Z] Copying: 60/60 [kB] (average 29 MBps) 00:10:41.575 00:10:41.833 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:10:41.833 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:41.833 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:41.833 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:41.833 [2024-10-07 11:21:37.154846] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
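Each spdk_dd run in this test receives its bdev configuration as JSON on a spare file descriptor; that is what the --json /dev/fd/62 argument and the gen_conf helper in the trace are doing, and the payload is the block printed above: attach the emulated controller at 0000:00:10.0 as "Nvme0", then wait for bdev examination before any I/O. A plausible sketch of that plumbing, assuming process substitution is what backs /dev/fd/62 (the helper's real implementation is not shown in this log, spdk_dd is treated as if it were on PATH, and the dump path is shortened):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # same write command as in the trace above
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")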
00:10:41.833 [2024-10-07 11:21:37.154994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ] 00:10:41.833 { 00:10:41.833 "subsystems": [ 00:10:41.833 { 00:10:41.833 "subsystem": "bdev", 00:10:41.833 "config": [ 00:10:41.833 { 00:10:41.833 "params": { 00:10:41.833 "trtype": "pcie", 00:10:41.833 "traddr": "0000:00:10.0", 00:10:41.833 "name": "Nvme0" 00:10:41.833 }, 00:10:41.833 "method": "bdev_nvme_attach_controller" 00:10:41.833 }, 00:10:41.833 { 00:10:41.833 "method": "bdev_wait_for_examine" 00:10:41.833 } 00:10:41.833 ] 00:10:41.833 } 00:10:41.833 ] 00:10:41.833 } 00:10:41.833 [2024-10-07 11:21:37.294168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.091 [2024-10-07 11:21:37.412249] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.091 [2024-10-07 11:21:37.467124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.091  [2024-10-07T11:21:37.879Z] Copying: 60/60 [kB] (average 19 MBps) 00:10:42.356 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:42.356 11:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:42.356 { 00:10:42.356 "subsystems": [ 00:10:42.356 { 00:10:42.356 "subsystem": "bdev", 00:10:42.356 "config": [ 00:10:42.356 { 00:10:42.356 "params": { 00:10:42.356 "trtype": "pcie", 00:10:42.356 "traddr": "0000:00:10.0", 00:10:42.356 "name": "Nvme0" 00:10:42.356 }, 00:10:42.356 "method": "bdev_nvme_attach_controller" 00:10:42.356 }, 00:10:42.356 { 00:10:42.356 "method": "bdev_wait_for_examine" 00:10:42.356 } 00:10:42.356 ] 00:10:42.356 } 00:10:42.356 ] 00:10:42.356 } 00:10:42.356 [2024-10-07 11:21:37.871409] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
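The pass that just finished (bs=4096, qd=1) is the cycle dd_rw repeats for every block-size/queue-depth pair: write a generated pattern file to the bdev, read the same region back into a second dump file, byte-compare the two, then zero the target so the next pair starts from a clean device. Condensed from the traced commands, with dump paths shortened and <(gen_conf) standing in for the /dev/fd/62 plumbing:

    spdk_dd --if=dd.dump0  --ob=Nvme0n1  --bs=4096 --qd=1            --json <(gen_conf)  # write the 61440-byte pattern
    spdk_dd --ib=Nvme0n1   --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)  # read the same 15 blocks back
    diff -q dd.dump0 dd.dump1                                                            # silent when the data round-trips
    spdk_dd --if=/dev/zero --bs=1048576  --ob=Nvme0n1 --count=1      --json <(gen_conf)  # clear_nvme: overwrite the first MiB with zeroes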
00:10:42.356 [2024-10-07 11:21:37.871516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:10:42.629 [2024-10-07 11:21:38.011121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.629 [2024-10-07 11:21:38.128530] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.888 [2024-10-07 11:21:38.184408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.888  [2024-10-07T11:21:38.669Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:43.146 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:43.146 11:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:43.712 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:10:43.712 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:43.712 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:43.712 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:43.712 [2024-10-07 11:21:39.200479] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:43.712 [2024-10-07 11:21:39.200603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:10:43.712 { 00:10:43.712 "subsystems": [ 00:10:43.712 { 00:10:43.712 "subsystem": "bdev", 00:10:43.712 "config": [ 00:10:43.712 { 00:10:43.712 "params": { 00:10:43.712 "trtype": "pcie", 00:10:43.712 "traddr": "0000:00:10.0", 00:10:43.712 "name": "Nvme0" 00:10:43.712 }, 00:10:43.712 "method": "bdev_nvme_attach_controller" 00:10:43.712 }, 00:10:43.712 { 00:10:43.712 "method": "bdev_wait_for_examine" 00:10:43.712 } 00:10:43.712 ] 00:10:43.712 } 00:10:43.712 ] 00:10:43.712 } 00:10:43.971 [2024-10-07 11:21:39.338438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.971 [2024-10-07 11:21:39.467261] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.229 [2024-10-07 11:21:39.523367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.229  [2024-10-07T11:21:40.010Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:44.487 00:10:44.487 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:10:44.487 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:44.487 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:44.487 11:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:44.487 { 00:10:44.487 "subsystems": [ 00:10:44.487 { 00:10:44.487 "subsystem": "bdev", 00:10:44.487 "config": [ 00:10:44.487 { 00:10:44.487 "params": { 00:10:44.487 "trtype": "pcie", 00:10:44.487 "traddr": "0000:00:10.0", 00:10:44.487 "name": "Nvme0" 00:10:44.487 }, 00:10:44.487 "method": "bdev_nvme_attach_controller" 00:10:44.487 }, 00:10:44.487 { 00:10:44.487 "method": "bdev_wait_for_examine" 00:10:44.487 } 00:10:44.487 ] 00:10:44.487 } 00:10:44.487 ] 00:10:44.487 } 00:10:44.488 [2024-10-07 11:21:39.990324] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:44.488 [2024-10-07 11:21:39.990504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:10:44.746 [2024-10-07 11:21:40.134485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.746 [2024-10-07 11:21:40.253808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.009 [2024-10-07 11:21:40.307848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.009  [2024-10-07T11:21:40.790Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:45.267 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:45.267 11:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:45.267 { 00:10:45.267 "subsystems": [ 00:10:45.267 { 00:10:45.267 "subsystem": "bdev", 00:10:45.267 "config": [ 00:10:45.267 { 00:10:45.267 "params": { 00:10:45.267 "trtype": "pcie", 00:10:45.267 "traddr": "0000:00:10.0", 00:10:45.267 "name": "Nvme0" 00:10:45.267 }, 00:10:45.267 "method": "bdev_nvme_attach_controller" 00:10:45.267 }, 00:10:45.267 { 00:10:45.267 "method": "bdev_wait_for_examine" 00:10:45.267 } 00:10:45.267 ] 00:10:45.267 } 00:10:45.267 ] 00:10:45.267 } 00:10:45.267 [2024-10-07 11:21:40.707195] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:45.268 [2024-10-07 11:21:40.707301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:10:45.526 [2024-10-07 11:21:40.843553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.526 [2024-10-07 11:21:40.954526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.526 [2024-10-07 11:21:41.011238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.783  [2024-10-07T11:21:41.565Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:46.042 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:46.042 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:46.609 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:10:46.609 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:46.609 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:46.609 11:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:46.609 [2024-10-07 11:21:41.985149] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
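From here on the log is the same cycle at the remaining combinations. The basic_rw setup traced at the start of dd_rw builds the block-size list by left-shifting the native size and pairs it with queue depths 1 and 64, while the block count shrinks so each pass moves a similar amount of data (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152 bytes). A small reconstruction of that driver loop; the count formula is inferred from those numbers, not copied from dd/basic_rw.sh:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384, as in the trace
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        count=$((61440 / bs))                                # reproduces the 15 / 7 / 3 seen in this run
        echo "bs=$bs qd=$qd count=$count size=$((count * bs))"
      done
    done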
00:10:46.609 [2024-10-07 11:21:41.985256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:10:46.609 { 00:10:46.609 "subsystems": [ 00:10:46.610 { 00:10:46.610 "subsystem": "bdev", 00:10:46.610 "config": [ 00:10:46.610 { 00:10:46.610 "params": { 00:10:46.610 "trtype": "pcie", 00:10:46.610 "traddr": "0000:00:10.0", 00:10:46.610 "name": "Nvme0" 00:10:46.610 }, 00:10:46.610 "method": "bdev_nvme_attach_controller" 00:10:46.610 }, 00:10:46.610 { 00:10:46.610 "method": "bdev_wait_for_examine" 00:10:46.610 } 00:10:46.610 ] 00:10:46.610 } 00:10:46.610 ] 00:10:46.610 } 00:10:46.610 [2024-10-07 11:21:42.120709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.867 [2024-10-07 11:21:42.229576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.867 [2024-10-07 11:21:42.283081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.867  [2024-10-07T11:21:42.649Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:47.126 00:10:47.126 11:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:10:47.126 11:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:47.126 11:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:47.126 11:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:47.385 { 00:10:47.385 "subsystems": [ 00:10:47.385 { 00:10:47.385 "subsystem": "bdev", 00:10:47.385 "config": [ 00:10:47.385 { 00:10:47.385 "params": { 00:10:47.385 "trtype": "pcie", 00:10:47.385 "traddr": "0000:00:10.0", 00:10:47.385 "name": "Nvme0" 00:10:47.385 }, 00:10:47.385 "method": "bdev_nvme_attach_controller" 00:10:47.385 }, 00:10:47.385 { 00:10:47.385 "method": "bdev_wait_for_examine" 00:10:47.385 } 00:10:47.385 ] 00:10:47.385 } 00:10:47.385 ] 00:10:47.385 } 00:10:47.385 [2024-10-07 11:21:42.677308] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:47.385 [2024-10-07 11:21:42.677427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:10:47.385 [2024-10-07 11:21:42.816461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.643 [2024-10-07 11:21:42.931188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.643 [2024-10-07 11:21:42.987004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.643  [2024-10-07T11:21:43.424Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:47.901 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:47.901 11:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:47.901 { 00:10:47.901 "subsystems": [ 00:10:47.901 { 00:10:47.901 "subsystem": "bdev", 00:10:47.901 "config": [ 00:10:47.901 { 00:10:47.901 "params": { 00:10:47.901 "trtype": "pcie", 00:10:47.901 "traddr": "0000:00:10.0", 00:10:47.901 "name": "Nvme0" 00:10:47.901 }, 00:10:47.901 "method": "bdev_nvme_attach_controller" 00:10:47.901 }, 00:10:47.901 { 00:10:47.901 "method": "bdev_wait_for_examine" 00:10:47.901 } 00:10:47.901 ] 00:10:47.901 } 00:10:47.901 ] 00:10:47.901 } 00:10:47.901 [2024-10-07 11:21:43.385494] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:47.901 [2024-10-07 11:21:43.385596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:10:48.160 [2024-10-07 11:21:43.524984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.160 [2024-10-07 11:21:43.642569] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.419 [2024-10-07 11:21:43.697923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.419  [2024-10-07T11:21:44.200Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:48.677 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:48.677 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:49.243 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:10:49.243 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:49.243 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:49.243 11:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:49.243 [2024-10-07 11:21:44.717569] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:49.243 [2024-10-07 11:21:44.717677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:10:49.243 { 00:10:49.243 "subsystems": [ 00:10:49.243 { 00:10:49.243 "subsystem": "bdev", 00:10:49.243 "config": [ 00:10:49.243 { 00:10:49.243 "params": { 00:10:49.243 "trtype": "pcie", 00:10:49.243 "traddr": "0000:00:10.0", 00:10:49.243 "name": "Nvme0" 00:10:49.243 }, 00:10:49.243 "method": "bdev_nvme_attach_controller" 00:10:49.243 }, 00:10:49.243 { 00:10:49.243 "method": "bdev_wait_for_examine" 00:10:49.243 } 00:10:49.243 ] 00:10:49.243 } 00:10:49.243 ] 00:10:49.243 } 00:10:49.502 [2024-10-07 11:21:44.856985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.502 [2024-10-07 11:21:44.983719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.760 [2024-10-07 11:21:45.042098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.760  [2024-10-07T11:21:45.541Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:50.018 00:10:50.018 11:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:10:50.018 11:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:50.018 11:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:50.019 11:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:50.019 { 00:10:50.019 "subsystems": [ 00:10:50.019 { 00:10:50.019 "subsystem": "bdev", 00:10:50.019 "config": [ 00:10:50.019 { 00:10:50.019 "params": { 00:10:50.019 "trtype": "pcie", 00:10:50.019 "traddr": "0000:00:10.0", 00:10:50.019 "name": "Nvme0" 00:10:50.019 }, 00:10:50.019 "method": "bdev_nvme_attach_controller" 00:10:50.019 }, 00:10:50.019 { 00:10:50.019 "method": "bdev_wait_for_examine" 00:10:50.019 } 00:10:50.019 ] 00:10:50.019 } 00:10:50.019 ] 00:10:50.019 } 00:10:50.019 [2024-10-07 11:21:45.458922] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:50.019 [2024-10-07 11:21:45.459009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:10:50.277 [2024-10-07 11:21:45.593738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.277 [2024-10-07 11:21:45.712484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.277 [2024-10-07 11:21:45.767226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.535  [2024-10-07T11:21:46.316Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:50.793 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:50.793 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:50.793 [2024-10-07 11:21:46.149223] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:50.793 [2024-10-07 11:21:46.149337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:10:50.793 { 00:10:50.793 "subsystems": [ 00:10:50.793 { 00:10:50.793 "subsystem": "bdev", 00:10:50.793 "config": [ 00:10:50.793 { 00:10:50.793 "params": { 00:10:50.793 "trtype": "pcie", 00:10:50.793 "traddr": "0000:00:10.0", 00:10:50.793 "name": "Nvme0" 00:10:50.793 }, 00:10:50.793 "method": "bdev_nvme_attach_controller" 00:10:50.793 }, 00:10:50.793 { 00:10:50.793 "method": "bdev_wait_for_examine" 00:10:50.793 } 00:10:50.793 ] 00:10:50.793 } 00:10:50.793 ] 00:10:50.793 } 00:10:50.793 [2024-10-07 11:21:46.280806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.052 [2024-10-07 11:21:46.389237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.052 [2024-10-07 11:21:46.445285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.052  [2024-10-07T11:21:46.833Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:51.310 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:51.310 11:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:51.875 11:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:10:51.876 11:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:51.876 11:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:51.876 11:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:52.133 [2024-10-07 11:21:47.400303] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
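One detail in these invocations: the write direction never passes --count because the transfer is bounded by the size of the input dump file, while the read-back direction always does (--count=15, 7 or 3 here), since the bdev has no natural end-of-pattern marker and the copy would otherwise keep going toward the end of the 5 GiB namespace. clear_nvme bounds its zero-fill the same way, with a single 1 MiB block. For the 16384-byte passes running here the pairing looks roughly like this (paths shortened, <(gen_conf) standing in for /dev/fd/62):

    spdk_dd --if=dd.dump0 --ob=Nvme0n1  --bs=16384 --qd=1           --json <(gen_conf)  # stops at EOF of dd.dump0 (49152 bytes)
    spdk_dd --ib=Nvme0n1  --of=dd.dump1 --bs=16384 --qd=1 --count=3 --json <(gen_conf)  # must be told to stop after 3 blocks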
00:10:52.133 [2024-10-07 11:21:47.400443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:10:52.133 { 00:10:52.133 "subsystems": [ 00:10:52.134 { 00:10:52.134 "subsystem": "bdev", 00:10:52.134 "config": [ 00:10:52.134 { 00:10:52.134 "params": { 00:10:52.134 "trtype": "pcie", 00:10:52.134 "traddr": "0000:00:10.0", 00:10:52.134 "name": "Nvme0" 00:10:52.134 }, 00:10:52.134 "method": "bdev_nvme_attach_controller" 00:10:52.134 }, 00:10:52.134 { 00:10:52.134 "method": "bdev_wait_for_examine" 00:10:52.134 } 00:10:52.134 ] 00:10:52.134 } 00:10:52.134 ] 00:10:52.134 } 00:10:52.134 [2024-10-07 11:21:47.541374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.434 [2024-10-07 11:21:47.675092] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.434 [2024-10-07 11:21:47.735370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.434  [2024-10-07T11:21:48.215Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:52.692 00:10:52.692 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:10:52.692 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:52.692 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:52.692 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:52.692 { 00:10:52.692 "subsystems": [ 00:10:52.692 { 00:10:52.692 "subsystem": "bdev", 00:10:52.692 "config": [ 00:10:52.692 { 00:10:52.692 "params": { 00:10:52.692 "trtype": "pcie", 00:10:52.692 "traddr": "0000:00:10.0", 00:10:52.692 "name": "Nvme0" 00:10:52.692 }, 00:10:52.692 "method": "bdev_nvme_attach_controller" 00:10:52.692 }, 00:10:52.692 { 00:10:52.692 "method": "bdev_wait_for_examine" 00:10:52.692 } 00:10:52.692 ] 00:10:52.692 } 00:10:52.692 ] 00:10:52.692 } 00:10:52.692 [2024-10-07 11:21:48.140467] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:52.692 [2024-10-07 11:21:48.140582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:10:52.950 [2024-10-07 11:21:48.281073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.950 [2024-10-07 11:21:48.396973] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.950 [2024-10-07 11:21:48.452282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.207  [2024-10-07T11:21:48.988Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:53.465 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:53.465 11:21:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 { 00:10:53.465 "subsystems": [ 00:10:53.465 { 00:10:53.465 "subsystem": "bdev", 00:10:53.465 "config": [ 00:10:53.465 { 00:10:53.465 "params": { 00:10:53.465 "trtype": "pcie", 00:10:53.465 "traddr": "0000:00:10.0", 00:10:53.465 "name": "Nvme0" 00:10:53.465 }, 00:10:53.465 "method": "bdev_nvme_attach_controller" 00:10:53.465 }, 00:10:53.465 { 00:10:53.465 "method": "bdev_wait_for_examine" 00:10:53.465 } 00:10:53.465 ] 00:10:53.465 } 00:10:53.465 ] 00:10:53.465 } 00:10:53.465 [2024-10-07 11:21:48.841535] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:53.465 [2024-10-07 11:21:48.841610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:10:53.465 [2024-10-07 11:21:48.977031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.724 [2024-10-07 11:21:49.090694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.724 [2024-10-07 11:21:49.145842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.982  [2024-10-07T11:21:49.505Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:53.982 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:53.982 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:54.549 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:10:54.549 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:54.549 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:54.549 11:21:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:54.549 { 00:10:54.549 "subsystems": [ 00:10:54.549 { 00:10:54.549 "subsystem": "bdev", 00:10:54.549 "config": [ 00:10:54.549 { 00:10:54.549 "params": { 00:10:54.549 "trtype": "pcie", 00:10:54.549 "traddr": "0000:00:10.0", 00:10:54.549 "name": "Nvme0" 00:10:54.549 }, 00:10:54.549 "method": "bdev_nvme_attach_controller" 00:10:54.549 }, 00:10:54.549 { 00:10:54.549 "method": "bdev_wait_for_examine" 00:10:54.549 } 00:10:54.549 ] 00:10:54.549 } 00:10:54.549 ] 00:10:54.549 } 00:10:54.549 [2024-10-07 11:21:49.999595] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:10:54.549 [2024-10-07 11:21:49.999692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:10:54.806 [2024-10-07 11:21:50.138172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.806 [2024-10-07 11:21:50.253207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.806 [2024-10-07 11:21:50.308009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.063  [2024-10-07T11:21:50.845Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:55.322 00:10:55.322 11:21:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:10:55.322 11:21:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:55.322 11:21:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:55.322 11:21:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:55.322 [2024-10-07 11:21:50.674402] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:55.322 [2024-10-07 11:21:50.674529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:10:55.322 { 00:10:55.322 "subsystems": [ 00:10:55.322 { 00:10:55.322 "subsystem": "bdev", 00:10:55.322 "config": [ 00:10:55.322 { 00:10:55.322 "params": { 00:10:55.322 "trtype": "pcie", 00:10:55.322 "traddr": "0000:00:10.0", 00:10:55.322 "name": "Nvme0" 00:10:55.322 }, 00:10:55.322 "method": "bdev_nvme_attach_controller" 00:10:55.322 }, 00:10:55.322 { 00:10:55.322 "method": "bdev_wait_for_examine" 00:10:55.322 } 00:10:55.322 ] 00:10:55.322 } 00:10:55.322 ] 00:10:55.322 } 00:10:55.322 [2024-10-07 11:21:50.809058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.579 [2024-10-07 11:21:50.915051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.579 [2024-10-07 11:21:50.968761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.579  [2024-10-07T11:21:51.360Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:55.837 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:55.837 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:56.095 [2024-10-07 11:21:51.365946] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:56.095 [2024-10-07 11:21:51.366212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:10:56.095 { 00:10:56.095 "subsystems": [ 00:10:56.095 { 00:10:56.095 "subsystem": "bdev", 00:10:56.095 "config": [ 00:10:56.095 { 00:10:56.095 "params": { 00:10:56.095 "trtype": "pcie", 00:10:56.095 "traddr": "0000:00:10.0", 00:10:56.095 "name": "Nvme0" 00:10:56.095 }, 00:10:56.095 "method": "bdev_nvme_attach_controller" 00:10:56.095 }, 00:10:56.095 { 00:10:56.095 "method": "bdev_wait_for_examine" 00:10:56.095 } 00:10:56.095 ] 00:10:56.095 } 00:10:56.095 ] 00:10:56.095 } 00:10:56.095 [2024-10-07 11:21:51.497817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.095 [2024-10-07 11:21:51.602732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.352 [2024-10-07 11:21:51.657665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.352  [2024-10-07T11:21:52.133Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:56.610 00:10:56.610 00:10:56.610 real 0m16.228s 00:10:56.610 user 0m12.098s 00:10:56.610 sys 0m5.624s 00:10:56.610 ************************************ 00:10:56.610 END TEST dd_rw 00:10:56.610 ************************************ 00:10:56.610 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.610 11:21:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:56.610 ************************************ 00:10:56.610 START TEST dd_rw_offset 00:10:56.610 ************************************ 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:56.610 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:56.611 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=v6lwkt0gk9xt88nwseftsh5ukirxqoojd98ta1z1j9a2ansrrly2osvu5kuuh3t81bjcs94y6h1gpoq9pjv77cjqp8l7sd1zlq8jtbhq76lnp08798wt084kfsd7x3txj34d3amyumx02xamh4hqpap08etwmxh42xrc4gufxiti5csh6gvaw2lqb2hld5hfnzfvovj06aesyi2usafrmtbougih26yiaopiaeqwwvrc8uvg1rmng79xfxjmelez60pekze2csreeeqhee180c8fwp0aa9p2q9i24yr5z36wva3rqhiom7y38sqaf6fjm2phgu8xcqifhp3irb7ycz89qidhcvbj3uh4kbeajfr8r6g3f0luzfni9o3cgg2cqlp446ujljcgxwdky8xukr2s02rx25rv0fr5v617bem9hyb4o2jx3kttm0m2upswmrj9yzqo268d6ct73uaojorpb83wufhh73qxx6so8nwf70ybn7yulgfp1qq9915qt4qbk9n5mrcscutr4b6yq22wqg1btgcstoib3l53hq4230u3xxywedz4kmux4nx02reazcgkk7rog0gnd44x3byvw8nv9ybws5cseck4wq6ealbpv6lo3m783nmqihx1a6ydjxx9cygx57yc9yz3tdlzyv3l6vgts3ikixw9bzdifrce3fol36dro52p0zl1kip9z6u4fwvej62efkmc1mtztrhzymwz2dd1gukos2sc0oecqnmalzc1blttnphuagbtq1xx4ibjwyrex341sckysyeeqfkjj3w6l21norkqas2365fk0fiyezfdvysei32303coysfke7zt19mafis87wa4l2o8liduj2vlag18uhtatxbd63pixekxudsssn6tn7hu058tv9eypcvkpx7wqokicyr47vzhfdly0yz06py0313x17hu8qkt7f4h33rbvf0cq0jcznsugy75azvm2wc8gzblxrlynca42e3fx6go3xbzojokdergh6wy475wsqnri4je1qxn894074fi5hfwgqw1obu587yeojgp9lcjx6ykyksxpigv1mexj55i9gkbqvfn9twexcegkra5aqdut8caqo79r8v0eanod8t0e0ba6zrxs7ouua5a53ut5yan8o6h8651p5lt7augur0gjp51zffg9e2ue6fqgl55wfksy2oglpv33ggxgl54lqbdvehlg6bwe2ombkja0npxuva3iunm482ony8ksaqoklw1gi335a4jb746cl8b19hryp654hkg6kf8mbr4qsop1rliqc6drut464hcqcqreuqo4b0ll6wdvwz0y39e1jg0tq3ydwh3do9nkdt9t3he45r29spb5lh4h547clboiu1f6y6lh4t70lug59mzowzue6g03szopqmzetjioj5cumx60i5iefyysja3y012e88y6cecc5uhcdl9rev09p6vflupjv1iul9jnv2ovjn5nr8fekcsikns7esxsfylndflcd8933ldrw8u3ylelgtq4joff5zi0qi7cglmj3hb9iuc4exeza22pcsc8u90hgn3znj5mrz14b17a5cz16ge7i7mdiiwa4ihkoe20n6zzqeljueoqs1b5c55osuy3fubaed18rg8abth8gewa6es2gcv7cnewaglixho00wpui9tukdxerthqv5gz7m3svtm5kcxsvx6ehmslt57lzj1bykvs1wyig13cjgxodzhcktvrdq6ptt2a14q53tj1puunoz1wnhkzisn6cjjph396kf9cqwwje0jotrttdw6iww9pnrdtwhcduwh2al41nieiz4ea7d824tcv60tme7jjdnq5dunoqg997cf1kqmlqwyvk9i4jckq5oz2wrnmtojtssomtlpisym28syudk5n5igfd4q28h8bo6ptqvlj7zrq0hcah83otwxi2ne5bcjo0o3yg3ytprrfn7hxufd7q019v2v9xyzg4aifgoum463e9ad4ne4v3cuhdwkbudq7tw244no2aj4ermy2pkc3u8l4w50hwxoac1ghu11b2qnbjn9fesebavj04qtp1znvy1jenlh480zewbt0oy7ayw648zdo9ofcor0u8k22y51eqoutpzrhzq58znlkar2avb1d9okp961aql9fi660z3b1gkm2e86gckoewbzxg9jwf7qvln8q4iijdbmh70hv2drhrdqgwanlrfh6kawrhbx0rpqvwp9igtg9bv1yej2igtrsmj6wjzeec5jrcqji190cek1qs0e4o9uyx4mozwvmmtrn41vv1rtlmjvhhyegiwmxbqxiegzyvg8r08yfvleid6e9mstxd71x7u7rc281isaqacbjqus5ybcsfucg1etb27ejd9orple03fozh170nwr8wy6wxgf6ykm6f0ollz0wj0dn8d2im5aeoeee78hkfbkrtwhl0iuaa1hbgm56omvzv6stsexqly3ou4am0mvv8my2oprzlen2itgeg90z04s2w9lh2399qulxok1zvbqr5frf2qefcq5natcjew4adud81migucwy858l2ctug0mn007iarbay9fhtw35rsmptob9xg8fqr76vlbuvcm0ru61ai9f0248stkruec03gelq1knpop1ioeb62bbxfiux5sz55g6lzrb3j2gc658bvsnh5w66m3lv0rg5q8bwt0ob70p0hdapwicsr1d37xfrrt9ggx3luq0vuiwm9heyrp1ymx6x128uyjdj4vx51zqxivw2mlqgtjrve1cuyzzdhkdaztw489um8gxyzfscvolztjipm4uh0a8l9dj35t7pcrf2iivofclmlxf13zdw81cj3df0sktzskatiis4xfewaapbjprdxjm1tsngd34z515ajnrqxklmtj15lastmjff7657jq5aws3muaezebgxlb6mcfel7yn8lc51cp1t689bka6y74qeek8tmzqkfau25twoujpau9ae7day2cmapgz1k0sd21emgpb4a3bcre7seu00nh66hwam3djd3a0h61jodw1pcsu05zjdwzkds6g3fjnqs1z0lu2ja96garabehs8m6qkk5iwi024hhxuq2wehelafs6pmaarx5ryg8mlz69gm6v0v2a467w5dar7zq09fdr6i9fzdvly3c6owbnjwzfelvwvosy7otsbx2zbt69wh0dn2hf5gqtly23b5svndgok1ahzey09kvkjw62omis65ph00agsh99tpca3kr22gk0aln2j0ilzejxcfhko73wveqfv9ysd8nmateqhhffraj03zw68jldsuayvsqp01fsfydw41gl19x0mw7d82hu1cv5cvzqjgp8arx4td9bftsjl3uo2kqa53a7aej74zqdnyx2qfkf70azuukftwex82is9vr25ccfxm41v63qlfheid906bww05rxmjf3y38auq1s1zgnenq2hbelyny74w9cc3qzckqjn9as8q1xyn924s1x82kkaydwmth6c
6nhterihizpl8eh1dtneypxx8i72sy5n2u0ysxgakxxtx6bbtz79nf31ff4aelh0xdlwfk2wjjdj8wj42azrv6f72n7eube5ncqg7u2t8vn8gq3hfz0uw1bmqi8eto3c2tuzhjxp4f7vedlpb3fq0r4kn7avtutjd875m1ujbvhkjkjha0y9xym4w6zc1l88jpkcadgs8tlqp23id304dze1gk5s2q99n0u2ixfwmpl8q09fgtr88gjpk5z5c12z90w7zxn91x4k9fpceq88qhb0ah7u8amf0dhb0xs1tw49n7bra6glh8hkpma29ibsqtoiq3nfbteyv8414ittc5urhsg9kaw3jkcmuc0qgwv3w6ixyh2weo50fmtj1xtlhhi58a64vt842yapkdqbn7x3r0imddhthny8safx5a1o0s7hhpezqf1a1x5507zu9gajko222yu99q9h8p9xl6bm6cfonj3inkijldott662hlyob0jz9toked3ddswye4y880h5xtruas6vsc2nsmnss3vn47qj4s 00:10:56.611 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:56.611 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:56.611 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:56.611 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-10-07 11:21:52.132309] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:56.611 [2024-10-07 11:21:52.132534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:10:56.868 { 00:10:56.868 "subsystems": [ 00:10:56.868 { 00:10:56.868 "subsystem": "bdev", 00:10:56.868 "config": [ 00:10:56.868 { 00:10:56.868 "params": { 00:10:56.868 "trtype": "pcie", 00:10:56.868 "traddr": "0000:00:10.0", 00:10:56.868 "name": "Nvme0" 00:10:56.868 }, 00:10:56.868 "method": "bdev_nvme_attach_controller" 00:10:56.868 }, 00:10:56.868 { 00:10:56.868 "method": "bdev_wait_for_examine" 00:10:56.868 } 00:10:56.868 ] 00:10:56.868 } 00:10:56.868 ] 00:10:56.868 } 00:10:56.868 [2024-10-07 11:21:52.265793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.868 [2024-10-07 11:21:52.390194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.126 [2024-10-07 11:21:52.446566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.126  [2024-10-07T11:21:52.906Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:57.383 00:10:57.383 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:57.383 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:57.383 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:57.383 11:21:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 [2024-10-07 11:21:52.854424] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
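dd_rw_offset exercises seek/skip handling rather than bulk throughput: gen_bytes 4096 produces the 4 KiB alphanumeric payload shown above, it is written one block into the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and the bytes are compared in the shell (the read -rn4096 data_check and the [[ ... == ... ]] match that follow). A compact sketch of the flow, with file names shortened, <(gen_conf) standing in for /dev/fd/62, and the payload assumed to land in dd.dump0 before the write:

    data=$(gen_bytes 4096)                                                      # random payload, as in the trace
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1  --seek=1           --json <(gen_conf)   # write at an offset of one block
    spdk_dd --ib=Nvme0n1  --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)   # read the same block back
    read -rn4096 data_check < dd.dump1
    [[ $data_check == "$data" ]] && echo "offset round-trip OK"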
00:10:57.383 [2024-10-07 11:21:52.854566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:10:57.383 { 00:10:57.383 "subsystems": [ 00:10:57.383 { 00:10:57.383 "subsystem": "bdev", 00:10:57.383 "config": [ 00:10:57.383 { 00:10:57.383 "params": { 00:10:57.383 "trtype": "pcie", 00:10:57.383 "traddr": "0000:00:10.0", 00:10:57.383 "name": "Nvme0" 00:10:57.383 }, 00:10:57.383 "method": "bdev_nvme_attach_controller" 00:10:57.383 }, 00:10:57.383 { 00:10:57.383 "method": "bdev_wait_for_examine" 00:10:57.383 } 00:10:57.383 ] 00:10:57.383 } 00:10:57.383 ] 00:10:57.383 } 00:10:57.641 [2024-10-07 11:21:52.992078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.641 [2024-10-07 11:21:53.095208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.641 [2024-10-07 11:21:53.153378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.899  [2024-10-07T11:21:53.681Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:58.158 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:58.158 ************************************ 00:10:58.158 END TEST dd_rw_offset 00:10:58.158 ************************************ 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ v6lwkt0gk9xt88nwseftsh5ukirxqoojd98ta1z1j9a2ansrrly2osvu5kuuh3t81bjcs94y6h1gpoq9pjv77cjqp8l7sd1zlq8jtbhq76lnp08798wt084kfsd7x3txj34d3amyumx02xamh4hqpap08etwmxh42xrc4gufxiti5csh6gvaw2lqb2hld5hfnzfvovj06aesyi2usafrmtbougih26yiaopiaeqwwvrc8uvg1rmng79xfxjmelez60pekze2csreeeqhee180c8fwp0aa9p2q9i24yr5z36wva3rqhiom7y38sqaf6fjm2phgu8xcqifhp3irb7ycz89qidhcvbj3uh4kbeajfr8r6g3f0luzfni9o3cgg2cqlp446ujljcgxwdky8xukr2s02rx25rv0fr5v617bem9hyb4o2jx3kttm0m2upswmrj9yzqo268d6ct73uaojorpb83wufhh73qxx6so8nwf70ybn7yulgfp1qq9915qt4qbk9n5mrcscutr4b6yq22wqg1btgcstoib3l53hq4230u3xxywedz4kmux4nx02reazcgkk7rog0gnd44x3byvw8nv9ybws5cseck4wq6ealbpv6lo3m783nmqihx1a6ydjxx9cygx57yc9yz3tdlzyv3l6vgts3ikixw9bzdifrce3fol36dro52p0zl1kip9z6u4fwvej62efkmc1mtztrhzymwz2dd1gukos2sc0oecqnmalzc1blttnphuagbtq1xx4ibjwyrex341sckysyeeqfkjj3w6l21norkqas2365fk0fiyezfdvysei32303coysfke7zt19mafis87wa4l2o8liduj2vlag18uhtatxbd63pixekxudsssn6tn7hu058tv9eypcvkpx7wqokicyr47vzhfdly0yz06py0313x17hu8qkt7f4h33rbvf0cq0jcznsugy75azvm2wc8gzblxrlynca42e3fx6go3xbzojokdergh6wy475wsqnri4je1qxn894074fi5hfwgqw1obu587yeojgp9lcjx6ykyksxpigv1mexj55i9gkbqvfn9twexcegkra5aqdut8caqo79r8v0eanod8t0e0ba6zrxs7ouua5a53ut5yan8o6h8651p5lt7augur0gjp51zffg9e2ue6fqgl55wfksy2oglpv33ggxgl54lqbdvehlg6bwe2ombkja0npxuva3iunm482ony8ksaqoklw1gi335a4jb746cl8b19hryp654hkg6kf8mbr4qsop1rliqc6drut464hcqcqreuqo4b0ll6wdvwz0y39e1jg0tq3ydwh3do9nkdt9t3he45r29spb5lh4h547clboiu1f6y6lh4t70lug59mzowzue6g03szopqmzetjioj5cumx60i5iefyysja3y012e88y6cecc5uhcdl9rev09p6vflupjv1iul9jnv2ovjn5nr8fekcsikns7esxsfylndflcd8933ldrw8u3ylelgtq4joff5zi0qi7cglmj3hb9iuc4exeza22pcsc8u90hgn3znj5mrz14b17a5cz16ge7i7mdiiwa4ihkoe20n6zzqeljueoqs1b5c55osuy3fubaed18rg8abth8gewa6es2gcv7cnewaglixho00wpui9tukdxerthqv5gz7m3svtm5kcxsvx6ehmslt57lzj1bykvs1wyig13cjgxodzhcktvrdq6ptt2a14q53tj1puunoz1wnhkzisn6cjjph396kf9cqwwje0jotrttdw6iww9pnrdtwhcduwh2al41nieiz4ea7d824tcv60tme7jjdnq5dunoqg997cf1kqmlqwyvk9i4jckq5oz2wrnmtojtssomtlpisym28syudk5n5igfd4q28h8bo6ptqvlj7zrq0hcah83otwxi2ne5bcjo0o3yg3ytprrfn7hxufd7q
019v2v9xyzg4aifgoum463e9ad4ne4v3cuhdwkbudq7tw244no2aj4ermy2pkc3u8l4w50hwxoac1ghu11b2qnbjn9fesebavj04qtp1znvy1jenlh480zewbt0oy7ayw648zdo9ofcor0u8k22y51eqoutpzrhzq58znlkar2avb1d9okp961aql9fi660z3b1gkm2e86gckoewbzxg9jwf7qvln8q4iijdbmh70hv2drhrdqgwanlrfh6kawrhbx0rpqvwp9igtg9bv1yej2igtrsmj6wjzeec5jrcqji190cek1qs0e4o9uyx4mozwvmmtrn41vv1rtlmjvhhyegiwmxbqxiegzyvg8r08yfvleid6e9mstxd71x7u7rc281isaqacbjqus5ybcsfucg1etb27ejd9orple03fozh170nwr8wy6wxgf6ykm6f0ollz0wj0dn8d2im5aeoeee78hkfbkrtwhl0iuaa1hbgm56omvzv6stsexqly3ou4am0mvv8my2oprzlen2itgeg90z04s2w9lh2399qulxok1zvbqr5frf2qefcq5natcjew4adud81migucwy858l2ctug0mn007iarbay9fhtw35rsmptob9xg8fqr76vlbuvcm0ru61ai9f0248stkruec03gelq1knpop1ioeb62bbxfiux5sz55g6lzrb3j2gc658bvsnh5w66m3lv0rg5q8bwt0ob70p0hdapwicsr1d37xfrrt9ggx3luq0vuiwm9heyrp1ymx6x128uyjdj4vx51zqxivw2mlqgtjrve1cuyzzdhkdaztw489um8gxyzfscvolztjipm4uh0a8l9dj35t7pcrf2iivofclmlxf13zdw81cj3df0sktzskatiis4xfewaapbjprdxjm1tsngd34z515ajnrqxklmtj15lastmjff7657jq5aws3muaezebgxlb6mcfel7yn8lc51cp1t689bka6y74qeek8tmzqkfau25twoujpau9ae7day2cmapgz1k0sd21emgpb4a3bcre7seu00nh66hwam3djd3a0h61jodw1pcsu05zjdwzkds6g3fjnqs1z0lu2ja96garabehs8m6qkk5iwi024hhxuq2wehelafs6pmaarx5ryg8mlz69gm6v0v2a467w5dar7zq09fdr6i9fzdvly3c6owbnjwzfelvwvosy7otsbx2zbt69wh0dn2hf5gqtly23b5svndgok1ahzey09kvkjw62omis65ph00agsh99tpca3kr22gk0aln2j0ilzejxcfhko73wveqfv9ysd8nmateqhhffraj03zw68jldsuayvsqp01fsfydw41gl19x0mw7d82hu1cv5cvzqjgp8arx4td9bftsjl3uo2kqa53a7aej74zqdnyx2qfkf70azuukftwex82is9vr25ccfxm41v63qlfheid906bww05rxmjf3y38auq1s1zgnenq2hbelyny74w9cc3qzckqjn9as8q1xyn924s1x82kkaydwmth6c6nhterihizpl8eh1dtneypxx8i72sy5n2u0ysxgakxxtx6bbtz79nf31ff4aelh0xdlwfk2wjjdj8wj42azrv6f72n7eube5ncqg7u2t8vn8gq3hfz0uw1bmqi8eto3c2tuzhjxp4f7vedlpb3fq0r4kn7avtutjd875m1ujbvhkjkjha0y9xym4w6zc1l88jpkcadgs8tlqp23id304dze1gk5s2q99n0u2ixfwmpl8q09fgtr88gjpk5z5c12z90w7zxn91x4k9fpceq88qhb0ah7u8amf0dhb0xs1tw49n7bra6glh8hkpma29ibsqtoiq3nfbteyv8414ittc5urhsg9kaw3jkcmuc0qgwv3w6ixyh2weo50fmtj1xtlhhi58a64vt842yapkdqbn7x3r0imddhthny8safx5a1o0s7hhpezqf1a1x5507zu9gajko222yu99q9h8p9xl6bm6cfonj3inkijldott662hlyob0jz9toked3ddswye4y880h5xtruas6vsc2nsmnss3vn47qj4s == 
\v\6\l\w\k\t\0\g\k\9\x\t\8\8\n\w\s\e\f\t\s\h\5\u\k\i\r\x\q\o\o\j\d\9\8\t\a\1\z\1\j\9\a\2\a\n\s\r\r\l\y\2\o\s\v\u\5\k\u\u\h\3\t\8\1\b\j\c\s\9\4\y\6\h\1\g\p\o\q\9\p\j\v\7\7\c\j\q\p\8\l\7\s\d\1\z\l\q\8\j\t\b\h\q\7\6\l\n\p\0\8\7\9\8\w\t\0\8\4\k\f\s\d\7\x\3\t\x\j\3\4\d\3\a\m\y\u\m\x\0\2\x\a\m\h\4\h\q\p\a\p\0\8\e\t\w\m\x\h\4\2\x\r\c\4\g\u\f\x\i\t\i\5\c\s\h\6\g\v\a\w\2\l\q\b\2\h\l\d\5\h\f\n\z\f\v\o\v\j\0\6\a\e\s\y\i\2\u\s\a\f\r\m\t\b\o\u\g\i\h\2\6\y\i\a\o\p\i\a\e\q\w\w\v\r\c\8\u\v\g\1\r\m\n\g\7\9\x\f\x\j\m\e\l\e\z\6\0\p\e\k\z\e\2\c\s\r\e\e\e\q\h\e\e\1\8\0\c\8\f\w\p\0\a\a\9\p\2\q\9\i\2\4\y\r\5\z\3\6\w\v\a\3\r\q\h\i\o\m\7\y\3\8\s\q\a\f\6\f\j\m\2\p\h\g\u\8\x\c\q\i\f\h\p\3\i\r\b\7\y\c\z\8\9\q\i\d\h\c\v\b\j\3\u\h\4\k\b\e\a\j\f\r\8\r\6\g\3\f\0\l\u\z\f\n\i\9\o\3\c\g\g\2\c\q\l\p\4\4\6\u\j\l\j\c\g\x\w\d\k\y\8\x\u\k\r\2\s\0\2\r\x\2\5\r\v\0\f\r\5\v\6\1\7\b\e\m\9\h\y\b\4\o\2\j\x\3\k\t\t\m\0\m\2\u\p\s\w\m\r\j\9\y\z\q\o\2\6\8\d\6\c\t\7\3\u\a\o\j\o\r\p\b\8\3\w\u\f\h\h\7\3\q\x\x\6\s\o\8\n\w\f\7\0\y\b\n\7\y\u\l\g\f\p\1\q\q\9\9\1\5\q\t\4\q\b\k\9\n\5\m\r\c\s\c\u\t\r\4\b\6\y\q\2\2\w\q\g\1\b\t\g\c\s\t\o\i\b\3\l\5\3\h\q\4\2\3\0\u\3\x\x\y\w\e\d\z\4\k\m\u\x\4\n\x\0\2\r\e\a\z\c\g\k\k\7\r\o\g\0\g\n\d\4\4\x\3\b\y\v\w\8\n\v\9\y\b\w\s\5\c\s\e\c\k\4\w\q\6\e\a\l\b\p\v\6\l\o\3\m\7\8\3\n\m\q\i\h\x\1\a\6\y\d\j\x\x\9\c\y\g\x\5\7\y\c\9\y\z\3\t\d\l\z\y\v\3\l\6\v\g\t\s\3\i\k\i\x\w\9\b\z\d\i\f\r\c\e\3\f\o\l\3\6\d\r\o\5\2\p\0\z\l\1\k\i\p\9\z\6\u\4\f\w\v\e\j\6\2\e\f\k\m\c\1\m\t\z\t\r\h\z\y\m\w\z\2\d\d\1\g\u\k\o\s\2\s\c\0\o\e\c\q\n\m\a\l\z\c\1\b\l\t\t\n\p\h\u\a\g\b\t\q\1\x\x\4\i\b\j\w\y\r\e\x\3\4\1\s\c\k\y\s\y\e\e\q\f\k\j\j\3\w\6\l\2\1\n\o\r\k\q\a\s\2\3\6\5\f\k\0\f\i\y\e\z\f\d\v\y\s\e\i\3\2\3\0\3\c\o\y\s\f\k\e\7\z\t\1\9\m\a\f\i\s\8\7\w\a\4\l\2\o\8\l\i\d\u\j\2\v\l\a\g\1\8\u\h\t\a\t\x\b\d\6\3\p\i\x\e\k\x\u\d\s\s\s\n\6\t\n\7\h\u\0\5\8\t\v\9\e\y\p\c\v\k\p\x\7\w\q\o\k\i\c\y\r\4\7\v\z\h\f\d\l\y\0\y\z\0\6\p\y\0\3\1\3\x\1\7\h\u\8\q\k\t\7\f\4\h\3\3\r\b\v\f\0\c\q\0\j\c\z\n\s\u\g\y\7\5\a\z\v\m\2\w\c\8\g\z\b\l\x\r\l\y\n\c\a\4\2\e\3\f\x\6\g\o\3\x\b\z\o\j\o\k\d\e\r\g\h\6\w\y\4\7\5\w\s\q\n\r\i\4\j\e\1\q\x\n\8\9\4\0\7\4\f\i\5\h\f\w\g\q\w\1\o\b\u\5\8\7\y\e\o\j\g\p\9\l\c\j\x\6\y\k\y\k\s\x\p\i\g\v\1\m\e\x\j\5\5\i\9\g\k\b\q\v\f\n\9\t\w\e\x\c\e\g\k\r\a\5\a\q\d\u\t\8\c\a\q\o\7\9\r\8\v\0\e\a\n\o\d\8\t\0\e\0\b\a\6\z\r\x\s\7\o\u\u\a\5\a\5\3\u\t\5\y\a\n\8\o\6\h\8\6\5\1\p\5\l\t\7\a\u\g\u\r\0\g\j\p\5\1\z\f\f\g\9\e\2\u\e\6\f\q\g\l\5\5\w\f\k\s\y\2\o\g\l\p\v\3\3\g\g\x\g\l\5\4\l\q\b\d\v\e\h\l\g\6\b\w\e\2\o\m\b\k\j\a\0\n\p\x\u\v\a\3\i\u\n\m\4\8\2\o\n\y\8\k\s\a\q\o\k\l\w\1\g\i\3\3\5\a\4\j\b\7\4\6\c\l\8\b\1\9\h\r\y\p\6\5\4\h\k\g\6\k\f\8\m\b\r\4\q\s\o\p\1\r\l\i\q\c\6\d\r\u\t\4\6\4\h\c\q\c\q\r\e\u\q\o\4\b\0\l\l\6\w\d\v\w\z\0\y\3\9\e\1\j\g\0\t\q\3\y\d\w\h\3\d\o\9\n\k\d\t\9\t\3\h\e\4\5\r\2\9\s\p\b\5\l\h\4\h\5\4\7\c\l\b\o\i\u\1\f\6\y\6\l\h\4\t\7\0\l\u\g\5\9\m\z\o\w\z\u\e\6\g\0\3\s\z\o\p\q\m\z\e\t\j\i\o\j\5\c\u\m\x\6\0\i\5\i\e\f\y\y\s\j\a\3\y\0\1\2\e\8\8\y\6\c\e\c\c\5\u\h\c\d\l\9\r\e\v\0\9\p\6\v\f\l\u\p\j\v\1\i\u\l\9\j\n\v\2\o\v\j\n\5\n\r\8\f\e\k\c\s\i\k\n\s\7\e\s\x\s\f\y\l\n\d\f\l\c\d\8\9\3\3\l\d\r\w\8\u\3\y\l\e\l\g\t\q\4\j\o\f\f\5\z\i\0\q\i\7\c\g\l\m\j\3\h\b\9\i\u\c\4\e\x\e\z\a\2\2\p\c\s\c\8\u\9\0\h\g\n\3\z\n\j\5\m\r\z\1\4\b\1\7\a\5\c\z\1\6\g\e\7\i\7\m\d\i\i\w\a\4\i\h\k\o\e\2\0\n\6\z\z\q\e\l\j\u\e\o\q\s\1\b\5\c\5\5\o\s\u\y\3\f\u\b\a\e\d\1\8\r\g\8\a\b\t\h\8\g\e\w\a\6\e\s\2\g\c\v\7\c\n\e\w\a\g\l\i\x\h\o\0\0\w\p\u\i\9\t\u\k\d\x\e\r\t\h\q\v\5\g\z\7\m\3\s\v\t\m\5\k\c\x\s\v\x\6\e\h\m\s\l\t\5\7\l\z\j\1\b\y\k\v\s\1\w\y\i\g\1\3\c\j\g\x\o\d\z\h\c\k\t\v\r\d\q\6\p\t\t\2\a\1\4\q\5\3\t\j\1\
p\u\u\n\o\z\1\w\n\h\k\z\i\s\n\6\c\j\j\p\h\3\9\6\k\f\9\c\q\w\w\j\e\0\j\o\t\r\t\t\d\w\6\i\w\w\9\p\n\r\d\t\w\h\c\d\u\w\h\2\a\l\4\1\n\i\e\i\z\4\e\a\7\d\8\2\4\t\c\v\6\0\t\m\e\7\j\j\d\n\q\5\d\u\n\o\q\g\9\9\7\c\f\1\k\q\m\l\q\w\y\v\k\9\i\4\j\c\k\q\5\o\z\2\w\r\n\m\t\o\j\t\s\s\o\m\t\l\p\i\s\y\m\2\8\s\y\u\d\k\5\n\5\i\g\f\d\4\q\2\8\h\8\b\o\6\p\t\q\v\l\j\7\z\r\q\0\h\c\a\h\8\3\o\t\w\x\i\2\n\e\5\b\c\j\o\0\o\3\y\g\3\y\t\p\r\r\f\n\7\h\x\u\f\d\7\q\0\1\9\v\2\v\9\x\y\z\g\4\a\i\f\g\o\u\m\4\6\3\e\9\a\d\4\n\e\4\v\3\c\u\h\d\w\k\b\u\d\q\7\t\w\2\4\4\n\o\2\a\j\4\e\r\m\y\2\p\k\c\3\u\8\l\4\w\5\0\h\w\x\o\a\c\1\g\h\u\1\1\b\2\q\n\b\j\n\9\f\e\s\e\b\a\v\j\0\4\q\t\p\1\z\n\v\y\1\j\e\n\l\h\4\8\0\z\e\w\b\t\0\o\y\7\a\y\w\6\4\8\z\d\o\9\o\f\c\o\r\0\u\8\k\2\2\y\5\1\e\q\o\u\t\p\z\r\h\z\q\5\8\z\n\l\k\a\r\2\a\v\b\1\d\9\o\k\p\9\6\1\a\q\l\9\f\i\6\6\0\z\3\b\1\g\k\m\2\e\8\6\g\c\k\o\e\w\b\z\x\g\9\j\w\f\7\q\v\l\n\8\q\4\i\i\j\d\b\m\h\7\0\h\v\2\d\r\h\r\d\q\g\w\a\n\l\r\f\h\6\k\a\w\r\h\b\x\0\r\p\q\v\w\p\9\i\g\t\g\9\b\v\1\y\e\j\2\i\g\t\r\s\m\j\6\w\j\z\e\e\c\5\j\r\c\q\j\i\1\9\0\c\e\k\1\q\s\0\e\4\o\9\u\y\x\4\m\o\z\w\v\m\m\t\r\n\4\1\v\v\1\r\t\l\m\j\v\h\h\y\e\g\i\w\m\x\b\q\x\i\e\g\z\y\v\g\8\r\0\8\y\f\v\l\e\i\d\6\e\9\m\s\t\x\d\7\1\x\7\u\7\r\c\2\8\1\i\s\a\q\a\c\b\j\q\u\s\5\y\b\c\s\f\u\c\g\1\e\t\b\2\7\e\j\d\9\o\r\p\l\e\0\3\f\o\z\h\1\7\0\n\w\r\8\w\y\6\w\x\g\f\6\y\k\m\6\f\0\o\l\l\z\0\w\j\0\d\n\8\d\2\i\m\5\a\e\o\e\e\e\7\8\h\k\f\b\k\r\t\w\h\l\0\i\u\a\a\1\h\b\g\m\5\6\o\m\v\z\v\6\s\t\s\e\x\q\l\y\3\o\u\4\a\m\0\m\v\v\8\m\y\2\o\p\r\z\l\e\n\2\i\t\g\e\g\9\0\z\0\4\s\2\w\9\l\h\2\3\9\9\q\u\l\x\o\k\1\z\v\b\q\r\5\f\r\f\2\q\e\f\c\q\5\n\a\t\c\j\e\w\4\a\d\u\d\8\1\m\i\g\u\c\w\y\8\5\8\l\2\c\t\u\g\0\m\n\0\0\7\i\a\r\b\a\y\9\f\h\t\w\3\5\r\s\m\p\t\o\b\9\x\g\8\f\q\r\7\6\v\l\b\u\v\c\m\0\r\u\6\1\a\i\9\f\0\2\4\8\s\t\k\r\u\e\c\0\3\g\e\l\q\1\k\n\p\o\p\1\i\o\e\b\6\2\b\b\x\f\i\u\x\5\s\z\5\5\g\6\l\z\r\b\3\j\2\g\c\6\5\8\b\v\s\n\h\5\w\6\6\m\3\l\v\0\r\g\5\q\8\b\w\t\0\o\b\7\0\p\0\h\d\a\p\w\i\c\s\r\1\d\3\7\x\f\r\r\t\9\g\g\x\3\l\u\q\0\v\u\i\w\m\9\h\e\y\r\p\1\y\m\x\6\x\1\2\8\u\y\j\d\j\4\v\x\5\1\z\q\x\i\v\w\2\m\l\q\g\t\j\r\v\e\1\c\u\y\z\z\d\h\k\d\a\z\t\w\4\8\9\u\m\8\g\x\y\z\f\s\c\v\o\l\z\t\j\i\p\m\4\u\h\0\a\8\l\9\d\j\3\5\t\7\p\c\r\f\2\i\i\v\o\f\c\l\m\l\x\f\1\3\z\d\w\8\1\c\j\3\d\f\0\s\k\t\z\s\k\a\t\i\i\s\4\x\f\e\w\a\a\p\b\j\p\r\d\x\j\m\1\t\s\n\g\d\3\4\z\5\1\5\a\j\n\r\q\x\k\l\m\t\j\1\5\l\a\s\t\m\j\f\f\7\6\5\7\j\q\5\a\w\s\3\m\u\a\e\z\e\b\g\x\l\b\6\m\c\f\e\l\7\y\n\8\l\c\5\1\c\p\1\t\6\8\9\b\k\a\6\y\7\4\q\e\e\k\8\t\m\z\q\k\f\a\u\2\5\t\w\o\u\j\p\a\u\9\a\e\7\d\a\y\2\c\m\a\p\g\z\1\k\0\s\d\2\1\e\m\g\p\b\4\a\3\b\c\r\e\7\s\e\u\0\0\n\h\6\6\h\w\a\m\3\d\j\d\3\a\0\h\6\1\j\o\d\w\1\p\c\s\u\0\5\z\j\d\w\z\k\d\s\6\g\3\f\j\n\q\s\1\z\0\l\u\2\j\a\9\6\g\a\r\a\b\e\h\s\8\m\6\q\k\k\5\i\w\i\0\2\4\h\h\x\u\q\2\w\e\h\e\l\a\f\s\6\p\m\a\a\r\x\5\r\y\g\8\m\l\z\6\9\g\m\6\v\0\v\2\a\4\6\7\w\5\d\a\r\7\z\q\0\9\f\d\r\6\i\9\f\z\d\v\l\y\3\c\6\o\w\b\n\j\w\z\f\e\l\v\w\v\o\s\y\7\o\t\s\b\x\2\z\b\t\6\9\w\h\0\d\n\2\h\f\5\g\q\t\l\y\2\3\b\5\s\v\n\d\g\o\k\1\a\h\z\e\y\0\9\k\v\k\j\w\6\2\o\m\i\s\6\5\p\h\0\0\a\g\s\h\9\9\t\p\c\a\3\k\r\2\2\g\k\0\a\l\n\2\j\0\i\l\z\e\j\x\c\f\h\k\o\7\3\w\v\e\q\f\v\9\y\s\d\8\n\m\a\t\e\q\h\h\f\f\r\a\j\0\3\z\w\6\8\j\l\d\s\u\a\y\v\s\q\p\0\1\f\s\f\y\d\w\4\1\g\l\1\9\x\0\m\w\7\d\8\2\h\u\1\c\v\5\c\v\z\q\j\g\p\8\a\r\x\4\t\d\9\b\f\t\s\j\l\3\u\o\2\k\q\a\5\3\a\7\a\e\j\7\4\z\q\d\n\y\x\2\q\f\k\f\7\0\a\z\u\u\k\f\t\w\e\x\8\2\i\s\9\v\r\2\5\c\c\f\x\m\4\1\v\6\3\q\l\f\h\e\i\d\9\0\6\b\w\w\0\5\r\x\m\j\f\3\y\3\8\a\u\q\1\s\1\z\g\n\e\n\q\2\h\b\e\l\y\n\y\7\4\w\9\c\c\3\q\z\c\k\q\j\n\9\a\s\8\q\1\x\y\n\9\2\4\s\1\x\8\2\k\k\a\y\d\w\m\t\h\6\c\6\n\h\t\e
\r\i\h\i\z\p\l\8\e\h\1\d\t\n\e\y\p\x\x\8\i\7\2\s\y\5\n\2\u\0\y\s\x\g\a\k\x\x\t\x\6\b\b\t\z\7\9\n\f\3\1\f\f\4\a\e\l\h\0\x\d\l\w\f\k\2\w\j\j\d\j\8\w\j\4\2\a\z\r\v\6\f\7\2\n\7\e\u\b\e\5\n\c\q\g\7\u\2\t\8\v\n\8\g\q\3\h\f\z\0\u\w\1\b\m\q\i\8\e\t\o\3\c\2\t\u\z\h\j\x\p\4\f\7\v\e\d\l\p\b\3\f\q\0\r\4\k\n\7\a\v\t\u\t\j\d\8\7\5\m\1\u\j\b\v\h\k\j\k\j\h\a\0\y\9\x\y\m\4\w\6\z\c\1\l\8\8\j\p\k\c\a\d\g\s\8\t\l\q\p\2\3\i\d\3\0\4\d\z\e\1\g\k\5\s\2\q\9\9\n\0\u\2\i\x\f\w\m\p\l\8\q\0\9\f\g\t\r\8\8\g\j\p\k\5\z\5\c\1\2\z\9\0\w\7\z\x\n\9\1\x\4\k\9\f\p\c\e\q\8\8\q\h\b\0\a\h\7\u\8\a\m\f\0\d\h\b\0\x\s\1\t\w\4\9\n\7\b\r\a\6\g\l\h\8\h\k\p\m\a\2\9\i\b\s\q\t\o\i\q\3\n\f\b\t\e\y\v\8\4\1\4\i\t\t\c\5\u\r\h\s\g\9\k\a\w\3\j\k\c\m\u\c\0\q\g\w\v\3\w\6\i\x\y\h\2\w\e\o\5\0\f\m\t\j\1\x\t\l\h\h\i\5\8\a\6\4\v\t\8\4\2\y\a\p\k\d\q\b\n\7\x\3\r\0\i\m\d\d\h\t\h\n\y\8\s\a\f\x\5\a\1\o\0\s\7\h\h\p\e\z\q\f\1\a\1\x\5\5\0\7\z\u\9\g\a\j\k\o\2\2\2\y\u\9\9\q\9\h\8\p\9\x\l\6\b\m\6\c\f\o\n\j\3\i\n\k\i\j\l\d\o\t\t\6\6\2\h\l\y\o\b\0\j\z\9\t\o\k\e\d\3\d\d\s\w\y\e\4\y\8\8\0\h\5\x\t\r\u\a\s\6\v\s\c\2\n\s\m\n\s\s\3\v\n\4\7\q\j\4\s ]] 00:10:58.158 00:10:58.158 real 0m1.472s 00:10:58.158 user 0m1.022s 00:10:58.158 sys 0m0.638s 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:58.158 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:58.159 11:21:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:58.159 { 00:10:58.159 "subsystems": [ 00:10:58.159 { 00:10:58.159 "subsystem": "bdev", 00:10:58.159 "config": [ 00:10:58.159 { 00:10:58.159 "params": { 00:10:58.159 "trtype": "pcie", 00:10:58.159 "traddr": "0000:00:10.0", 00:10:58.159 "name": "Nvme0" 00:10:58.159 }, 00:10:58.159 "method": "bdev_nvme_attach_controller" 00:10:58.159 }, 00:10:58.159 { 00:10:58.159 "method": "bdev_wait_for_examine" 00:10:58.159 } 00:10:58.159 ] 00:10:58.159 } 00:10:58.159 ] 00:10:58.159 } 00:10:58.159 [2024-10-07 11:21:53.608546] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
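Editor's note: the JSON the cleanup step feeds to spdk_dd over /dev/fd/62 above simply attaches the controller at PCIe address 0000:00:10.0 as "Nvme0" and waits for bdev examination before the 1 MiB zero-fill copy. A minimal sketch of driving the same copy by hand, assuming --json also accepts a regular file path (the log only shows it reading /dev/fd/62) and that /tmp/nvme0.json is just an illustrative file name:

  # Illustrative only: same bdev config as above, written to a file instead of a pipe.
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # Invocation copied from the log, only the --json source differs.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
    --ob=Nvme0n1 --count=1 --json /tmp/nvme0.json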
00:10:58.159 [2024-10-07 11:21:53.608662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:10:58.416 [2024-10-07 11:21:53.748299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.416 [2024-10-07 11:21:53.878441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.416 [2024-10-07 11:21:53.935586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.674  [2024-10-07T11:21:54.455Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:58.932 00:10:58.932 11:21:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:58.932 ************************************ 00:10:58.932 END TEST spdk_dd_basic_rw 00:10:58.932 ************************************ 00:10:58.932 00:10:58.932 real 0m19.706s 00:10:58.932 user 0m14.357s 00:10:58.932 sys 0m6.964s 00:10:58.932 11:21:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.932 11:21:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:58.932 11:21:54 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:58.932 11:21:54 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.932 11:21:54 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.932 11:21:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:58.932 ************************************ 00:10:58.932 START TEST spdk_dd_posix 00:10:58.932 ************************************ 00:10:58.932 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:58.932 * Looking for test storage... 
00:10:58.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:58.932 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:58.932 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:10:58.932 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.191 --rc genhtml_branch_coverage=1 00:10:59.191 --rc genhtml_function_coverage=1 00:10:59.191 --rc genhtml_legend=1 00:10:59.191 --rc geninfo_all_blocks=1 00:10:59.191 --rc geninfo_unexecuted_blocks=1 00:10:59.191 00:10:59.191 ' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.191 --rc genhtml_branch_coverage=1 00:10:59.191 --rc genhtml_function_coverage=1 00:10:59.191 --rc genhtml_legend=1 00:10:59.191 --rc geninfo_all_blocks=1 00:10:59.191 --rc geninfo_unexecuted_blocks=1 00:10:59.191 00:10:59.191 ' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.191 --rc genhtml_branch_coverage=1 00:10:59.191 --rc genhtml_function_coverage=1 00:10:59.191 --rc genhtml_legend=1 00:10:59.191 --rc geninfo_all_blocks=1 00:10:59.191 --rc geninfo_unexecuted_blocks=1 00:10:59.191 00:10:59.191 ' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.191 --rc genhtml_branch_coverage=1 00:10:59.191 --rc genhtml_function_coverage=1 00:10:59.191 --rc genhtml_legend=1 00:10:59.191 --rc geninfo_all_blocks=1 00:10:59.191 --rc geninfo_unexecuted_blocks=1 00:10:59.191 00:10:59.191 ' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:59.191 * First test run, liburing in use 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:59.191 ************************************ 00:10:59.191 START TEST dd_flag_append 00:10:59.191 ************************************ 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hi5y3ygo30s42bu3qj0dma06i2r5cfsh 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:59.191 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:59.192 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=6nnhopw3okwbbbuaqmn579kvrlnkbbwl 00:10:59.192 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hi5y3ygo30s42bu3qj0dma06i2r5cfsh 00:10:59.192 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 6nnhopw3okwbbbuaqmn579kvrlnkbbwl 00:10:59.192 11:21:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:59.192 [2024-10-07 11:21:54.585289] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
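Editor's note: the dd_flag_append run being set up here writes two random 32-character tokens to dd.dump0 and dd.dump1, copies dump0 onto dump1 with --oflag=append, and then checks that dump1 ends up as the original dump1 followed by dump0. A stand-alone sketch of that flow, where gen_random is a hypothetical stand-in for the harness's gen_bytes helper:

  # Sketch of the append check, assuming the spdk_dd path from the log.
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  gen_random() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }   # stand-in for gen_bytes
  dump0=$(gen_random 32); dump1=$(gen_random 32)
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  "$DD" --if=dd.dump0 --of=dd.dump1 --oflag=append      # append dump0 after dump1
  [[ $(cat dd.dump1) == "${dump1}${dump0}" ]] && echo "append OK"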
00:10:59.192 [2024-10-07 11:21:54.585680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:10:59.450 [2024-10-07 11:21:54.722670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.450 [2024-10-07 11:21:54.854026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.450 [2024-10-07 11:21:54.909928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:59.450  [2024-10-07T11:21:55.231Z] Copying: 32/32 [B] (average 31 kBps) 00:10:59.708 00:10:59.708 ************************************ 00:10:59.708 END TEST dd_flag_append 00:10:59.708 ************************************ 00:10:59.708 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 6nnhopw3okwbbbuaqmn579kvrlnkbbwlhi5y3ygo30s42bu3qj0dma06i2r5cfsh == \6\n\n\h\o\p\w\3\o\k\w\b\b\b\u\a\q\m\n\5\7\9\k\v\r\l\n\k\b\b\w\l\h\i\5\y\3\y\g\o\3\0\s\4\2\b\u\3\q\j\0\d\m\a\0\6\i\2\r\5\c\f\s\h ]] 00:10:59.708 00:10:59.708 real 0m0.669s 00:10:59.708 user 0m0.388s 00:10:59.709 sys 0m0.313s 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.709 11:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:59.967 ************************************ 00:10:59.967 START TEST dd_flag_directory 00:10:59.967 ************************************ 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.967 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:59.968 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:59.968 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:59.968 [2024-10-07 11:21:55.297159] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:10:59.968 [2024-10-07 11:21:55.297270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60252 ] 00:10:59.968 [2024-10-07 11:21:55.430485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.226 [2024-10-07 11:21:55.545653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.226 [2024-10-07 11:21:55.600959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.226 [2024-10-07 11:21:55.635534] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:00.226 [2024-10-07 11:21:55.635606] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:00.226 [2024-10-07 11:21:55.635619] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:00.484 [2024-10-07 11:21:55.754576] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.484 11:21:55 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:00.484 11:21:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:00.484 [2024-10-07 11:21:55.937239] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:00.484 [2024-10-07 11:21:55.937395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:11:00.742 [2024-10-07 11:21:56.077051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.742 [2024-10-07 11:21:56.200312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.742 [2024-10-07 11:21:56.257747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.000 [2024-10-07 11:21:56.293898] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.000 [2024-10-07 11:21:56.293973] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:01.000 [2024-10-07 11:21:56.294004] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.000 [2024-10-07 11:21:56.411516] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.000 00:11:01.000 real 0m1.273s 00:11:01.000 user 0m0.762s 00:11:01.000 sys 0m0.300s 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.000 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:11:01.000 ************************************ 00:11:01.000 END TEST dd_flag_directory 00:11:01.000 ************************************ 00:11:01.258 11:21:56 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:11:01.258 11:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:01.258 11:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.258 11:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 ************************************ 00:11:01.258 START TEST dd_flag_nofollow 00:11:01.259 ************************************ 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.259 11:21:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:01.259 [2024-10-07 11:21:56.640765] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
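Editor's note: the dd_flag_directory test that finished just above is a pair of negative checks: spdk_dd must refuse to open a regular file when the directory flag is supplied on either side of the copy, failing with "Not a directory". A reduced sketch of those two invocations, assuming the spdk_dd path from the log:

  # Both runs are expected to fail; the NOT wrapper in the harness checks the same thing.
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  printf 'x' > dd.dump0
  if "$DD" --if=dd.dump0 --iflag=directory --of=dd.dump0; then
    echo "unexpected success on --iflag=directory" >&2; exit 1
  fi
  if "$DD" --if=dd.dump0 --of=dd.dump0 --oflag=directory; then
    echo "unexpected success on --oflag=directory" >&2; exit 1
  fi
  echo "both invocations failed as expected (Not a directory)"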
00:11:01.259 [2024-10-07 11:21:56.640878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:11:01.259 [2024-10-07 11:21:56.777091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.518 [2024-10-07 11:21:56.913417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.518 [2024-10-07 11:21:56.972452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.518 [2024-10-07 11:21:57.013478] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:01.518 [2024-10-07 11:21:57.013571] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:01.518 [2024-10-07 11:21:57.013599] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.776 [2024-10-07 11:21:57.139391] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:01.776 11:21:57 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:01.776 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:01.776 [2024-10-07 11:21:57.296410] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:01.776 [2024-10-07 11:21:57.296541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:11:02.034 [2024-10-07 11:21:57.431032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.034 [2024-10-07 11:21:57.542100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.291 [2024-10-07 11:21:57.599686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.291 [2024-10-07 11:21:57.636002] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:02.291 [2024-10-07 11:21:57.636079] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:02.291 [2024-10-07 11:21:57.636120] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.291 [2024-10-07 11:21:57.757339] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:02.550 11:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:02.550 [2024-10-07 11:21:57.926705] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
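Editor's note: the nofollow checks running here symlink the dump files and expect spdk_dd to refuse the link with "Too many levels of symbolic links" whenever --iflag=nofollow or --oflag=nofollow is set, while the final run without the flag follows the link and copies normally. A condensed sketch under the same assumptions:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  if "$DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    echo "unexpected success reading through a symlink with nofollow" >&2; exit 1
  fi
  if "$DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow; then
    echo "unexpected success writing through a symlink with nofollow" >&2; exit 1
  fi
  "$DD" --if=dd.dump0.link --of=dd.dump1    # no flag: the link is followed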
00:11:02.550 [2024-10-07 11:21:57.926827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60318 ] 00:11:02.550 [2024-10-07 11:21:58.063881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.809 [2024-10-07 11:21:58.181258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.809 [2024-10-07 11:21:58.241136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.809  [2024-10-07T11:21:58.589Z] Copying: 512/512 [B] (average 500 kBps) 00:11:03.066 00:11:03.066 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 1h04fybhy92u3phthfvum1mhhcsxfcgily8asawm2tb3lipg4ssk0pc277eqf2mpyy3j15gcdv4hzqkwwwp7gnj6werfyyll65nw84mp9uzil81a3b3gorog465u5sg55o09skq7t53idbebu1exg58xeqei8039e8wwjn6l8jqj07hozb9g03jois3akk4amfpd7p2v3l2375dpzseq4674eeuq1nitsxqoxrmdle8vg5p2dd58f9avh60esdme1oaxgdmol9r46dx80ru7zz3bnn0pleuatdvg9xvitbiqerhixucdw7qob98qsecvhw2dqpqnx84jfrqemeab9wune1tfhspn4o9lgpi2lrs9bj2sajwk6euz9wt7fx0zhm8gh5ifh6th0ge7wq445ec1juhl1jzrksv4465l5q9f3ds6mcklvvkbpyeyt89ivku9g7u9sydpjdvlyjyycu5p5n2tx3rnu6qnhn4uz98aq05ahqb91wg97c3tnrpo == \1\h\0\4\f\y\b\h\y\9\2\u\3\p\h\t\h\f\v\u\m\1\m\h\h\c\s\x\f\c\g\i\l\y\8\a\s\a\w\m\2\t\b\3\l\i\p\g\4\s\s\k\0\p\c\2\7\7\e\q\f\2\m\p\y\y\3\j\1\5\g\c\d\v\4\h\z\q\k\w\w\w\p\7\g\n\j\6\w\e\r\f\y\y\l\l\6\5\n\w\8\4\m\p\9\u\z\i\l\8\1\a\3\b\3\g\o\r\o\g\4\6\5\u\5\s\g\5\5\o\0\9\s\k\q\7\t\5\3\i\d\b\e\b\u\1\e\x\g\5\8\x\e\q\e\i\8\0\3\9\e\8\w\w\j\n\6\l\8\j\q\j\0\7\h\o\z\b\9\g\0\3\j\o\i\s\3\a\k\k\4\a\m\f\p\d\7\p\2\v\3\l\2\3\7\5\d\p\z\s\e\q\4\6\7\4\e\e\u\q\1\n\i\t\s\x\q\o\x\r\m\d\l\e\8\v\g\5\p\2\d\d\5\8\f\9\a\v\h\6\0\e\s\d\m\e\1\o\a\x\g\d\m\o\l\9\r\4\6\d\x\8\0\r\u\7\z\z\3\b\n\n\0\p\l\e\u\a\t\d\v\g\9\x\v\i\t\b\i\q\e\r\h\i\x\u\c\d\w\7\q\o\b\9\8\q\s\e\c\v\h\w\2\d\q\p\q\n\x\8\4\j\f\r\q\e\m\e\a\b\9\w\u\n\e\1\t\f\h\s\p\n\4\o\9\l\g\p\i\2\l\r\s\9\b\j\2\s\a\j\w\k\6\e\u\z\9\w\t\7\f\x\0\z\h\m\8\g\h\5\i\f\h\6\t\h\0\g\e\7\w\q\4\4\5\e\c\1\j\u\h\l\1\j\z\r\k\s\v\4\4\6\5\l\5\q\9\f\3\d\s\6\m\c\k\l\v\v\k\b\p\y\e\y\t\8\9\i\v\k\u\9\g\7\u\9\s\y\d\p\j\d\v\l\y\j\y\y\c\u\5\p\5\n\2\t\x\3\r\n\u\6\q\n\h\n\4\u\z\9\8\a\q\0\5\a\h\q\b\9\1\w\g\9\7\c\3\t\n\r\p\o ]] 00:11:03.066 00:11:03.066 real 0m1.948s 00:11:03.066 user 0m1.140s 00:11:03.066 sys 0m0.626s 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:03.067 ************************************ 00:11:03.067 END TEST dd_flag_nofollow 00:11:03.067 ************************************ 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:03.067 ************************************ 00:11:03.067 START TEST dd_flag_noatime 00:11:03.067 ************************************ 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1728300118 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1728300118 00:11:03.067 11:21:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:11:04.439 11:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:04.439 [2024-10-07 11:21:59.647521] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:04.439 [2024-10-07 11:21:59.647649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60360 ] 00:11:04.439 [2024-10-07 11:21:59.787264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.439 [2024-10-07 11:21:59.913886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.695 [2024-10-07 11:21:59.971685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.695  [2024-10-07T11:22:00.476Z] Copying: 512/512 [B] (average 500 kBps) 00:11:04.953 00:11:04.953 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:04.953 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1728300118 )) 00:11:04.953 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:04.953 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1728300118 )) 00:11:04.953 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:04.953 [2024-10-07 11:22:00.299773] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:04.953 [2024-10-07 11:22:00.299911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60374 ] 00:11:04.953 [2024-10-07 11:22:00.435667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.210 [2024-10-07 11:22:00.562022] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.210 [2024-10-07 11:22:00.619516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.210  [2024-10-07T11:22:00.991Z] Copying: 512/512 [B] (average 500 kBps) 00:11:05.468 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1728300120 )) 00:11:05.468 00:11:05.468 real 0m2.324s 00:11:05.468 user 0m0.757s 00:11:05.468 sys 0m0.615s 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:05.468 ************************************ 00:11:05.468 END TEST dd_flag_noatime 00:11:05.468 ************************************ 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:05.468 ************************************ 00:11:05.468 START TEST dd_flags_misc 00:11:05.468 ************************************ 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:05.468 11:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:05.727 [2024-10-07 11:22:01.009856] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
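Editor's note: the dd_flag_noatime test that completed above records the source file's access time with stat --printf=%X, sleeps one second, and verifies that a copy with --iflag=noatime leaves the atime untouched while a plain copy advances it. A sketch of that check, assuming the filesystem's relatime behaviour does not mask the second comparison (it did not in this run):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  "$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before )) && echo "noatime respected"
  "$DD" --if=dd.dump0 --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) > atime_before )) && echo "atime advanced on normal read"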
00:11:05.727 [2024-10-07 11:22:01.009997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:11:05.727 [2024-10-07 11:22:01.149128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.985 [2024-10-07 11:22:01.268207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.985 [2024-10-07 11:22:01.325633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.985  [2024-10-07T11:22:01.766Z] Copying: 512/512 [B] (average 500 kBps) 00:11:06.243 00:11:06.243 11:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xrtw9m8jy0se4pjhwroxkqaj0rd3v20mjmakc1p9kvcl61319gffjxw5hitifq2wkegkyr2smulfrklghumrdj5wdogzusgn13cwdno31ku4fe3ib4224ye09huhw3kihkw4pa3m6hcroqwr0ylg0oh029zfkh9f25xb09utjob1d7vr2hztpc9sxi3dxxclaygr08gt3qbgjl5s7d3ih2i4fa4iidwacnyij3sijhccq9xl2pn2brvq5rsao9f2wmqpqffvt9exkajoogle7rck1rv0qgnh0eydppmu8ebos44f324uuoh5opfdc187g6b909ux864p82gp8jzejsrs1cea77xfp3qos4jpzisd36jmml2lyp1l4upfce5w76fva7s94g76bfrqlqg2lmvqsr6lik9yew0soynedhxqcb6vx7x30getj3a4vlkxd08r2r540s9qies1m73ahwmpsjc36b4217fx3uohlkq3658kwuuvne4fyatqgda8 == \x\r\t\w\9\m\8\j\y\0\s\e\4\p\j\h\w\r\o\x\k\q\a\j\0\r\d\3\v\2\0\m\j\m\a\k\c\1\p\9\k\v\c\l\6\1\3\1\9\g\f\f\j\x\w\5\h\i\t\i\f\q\2\w\k\e\g\k\y\r\2\s\m\u\l\f\r\k\l\g\h\u\m\r\d\j\5\w\d\o\g\z\u\s\g\n\1\3\c\w\d\n\o\3\1\k\u\4\f\e\3\i\b\4\2\2\4\y\e\0\9\h\u\h\w\3\k\i\h\k\w\4\p\a\3\m\6\h\c\r\o\q\w\r\0\y\l\g\0\o\h\0\2\9\z\f\k\h\9\f\2\5\x\b\0\9\u\t\j\o\b\1\d\7\v\r\2\h\z\t\p\c\9\s\x\i\3\d\x\x\c\l\a\y\g\r\0\8\g\t\3\q\b\g\j\l\5\s\7\d\3\i\h\2\i\4\f\a\4\i\i\d\w\a\c\n\y\i\j\3\s\i\j\h\c\c\q\9\x\l\2\p\n\2\b\r\v\q\5\r\s\a\o\9\f\2\w\m\q\p\q\f\f\v\t\9\e\x\k\a\j\o\o\g\l\e\7\r\c\k\1\r\v\0\q\g\n\h\0\e\y\d\p\p\m\u\8\e\b\o\s\4\4\f\3\2\4\u\u\o\h\5\o\p\f\d\c\1\8\7\g\6\b\9\0\9\u\x\8\6\4\p\8\2\g\p\8\j\z\e\j\s\r\s\1\c\e\a\7\7\x\f\p\3\q\o\s\4\j\p\z\i\s\d\3\6\j\m\m\l\2\l\y\p\1\l\4\u\p\f\c\e\5\w\7\6\f\v\a\7\s\9\4\g\7\6\b\f\r\q\l\q\g\2\l\m\v\q\s\r\6\l\i\k\9\y\e\w\0\s\o\y\n\e\d\h\x\q\c\b\6\v\x\7\x\3\0\g\e\t\j\3\a\4\v\l\k\x\d\0\8\r\2\r\5\4\0\s\9\q\i\e\s\1\m\7\3\a\h\w\m\p\s\j\c\3\6\b\4\2\1\7\f\x\3\u\o\h\l\k\q\3\6\5\8\k\w\u\u\v\n\e\4\f\y\a\t\q\g\d\a\8 ]] 00:11:06.243 11:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.243 11:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:06.243 [2024-10-07 11:22:01.665572] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:06.243 [2024-10-07 11:22:01.665678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:11:06.501 [2024-10-07 11:22:01.805566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.501 [2024-10-07 11:22:01.924387] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.501 [2024-10-07 11:22:01.982544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.501  [2024-10-07T11:22:02.284Z] Copying: 512/512 [B] (average 500 kBps) 00:11:06.761 00:11:06.761 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xrtw9m8jy0se4pjhwroxkqaj0rd3v20mjmakc1p9kvcl61319gffjxw5hitifq2wkegkyr2smulfrklghumrdj5wdogzusgn13cwdno31ku4fe3ib4224ye09huhw3kihkw4pa3m6hcroqwr0ylg0oh029zfkh9f25xb09utjob1d7vr2hztpc9sxi3dxxclaygr08gt3qbgjl5s7d3ih2i4fa4iidwacnyij3sijhccq9xl2pn2brvq5rsao9f2wmqpqffvt9exkajoogle7rck1rv0qgnh0eydppmu8ebos44f324uuoh5opfdc187g6b909ux864p82gp8jzejsrs1cea77xfp3qos4jpzisd36jmml2lyp1l4upfce5w76fva7s94g76bfrqlqg2lmvqsr6lik9yew0soynedhxqcb6vx7x30getj3a4vlkxd08r2r540s9qies1m73ahwmpsjc36b4217fx3uohlkq3658kwuuvne4fyatqgda8 == \x\r\t\w\9\m\8\j\y\0\s\e\4\p\j\h\w\r\o\x\k\q\a\j\0\r\d\3\v\2\0\m\j\m\a\k\c\1\p\9\k\v\c\l\6\1\3\1\9\g\f\f\j\x\w\5\h\i\t\i\f\q\2\w\k\e\g\k\y\r\2\s\m\u\l\f\r\k\l\g\h\u\m\r\d\j\5\w\d\o\g\z\u\s\g\n\1\3\c\w\d\n\o\3\1\k\u\4\f\e\3\i\b\4\2\2\4\y\e\0\9\h\u\h\w\3\k\i\h\k\w\4\p\a\3\m\6\h\c\r\o\q\w\r\0\y\l\g\0\o\h\0\2\9\z\f\k\h\9\f\2\5\x\b\0\9\u\t\j\o\b\1\d\7\v\r\2\h\z\t\p\c\9\s\x\i\3\d\x\x\c\l\a\y\g\r\0\8\g\t\3\q\b\g\j\l\5\s\7\d\3\i\h\2\i\4\f\a\4\i\i\d\w\a\c\n\y\i\j\3\s\i\j\h\c\c\q\9\x\l\2\p\n\2\b\r\v\q\5\r\s\a\o\9\f\2\w\m\q\p\q\f\f\v\t\9\e\x\k\a\j\o\o\g\l\e\7\r\c\k\1\r\v\0\q\g\n\h\0\e\y\d\p\p\m\u\8\e\b\o\s\4\4\f\3\2\4\u\u\o\h\5\o\p\f\d\c\1\8\7\g\6\b\9\0\9\u\x\8\6\4\p\8\2\g\p\8\j\z\e\j\s\r\s\1\c\e\a\7\7\x\f\p\3\q\o\s\4\j\p\z\i\s\d\3\6\j\m\m\l\2\l\y\p\1\l\4\u\p\f\c\e\5\w\7\6\f\v\a\7\s\9\4\g\7\6\b\f\r\q\l\q\g\2\l\m\v\q\s\r\6\l\i\k\9\y\e\w\0\s\o\y\n\e\d\h\x\q\c\b\6\v\x\7\x\3\0\g\e\t\j\3\a\4\v\l\k\x\d\0\8\r\2\r\5\4\0\s\9\q\i\e\s\1\m\7\3\a\h\w\m\p\s\j\c\3\6\b\4\2\1\7\f\x\3\u\o\h\l\k\q\3\6\5\8\k\w\u\u\v\n\e\4\f\y\a\t\q\g\d\a\8 ]] 00:11:06.761 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.761 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:07.018 [2024-10-07 11:22:02.295279] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:07.018 [2024-10-07 11:22:02.295422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:11:07.019 [2024-10-07 11:22:02.431492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.277 [2024-10-07 11:22:02.548992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.277 [2024-10-07 11:22:02.604861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.277  [2024-10-07T11:22:03.058Z] Copying: 512/512 [B] (average 125 kBps) 00:11:07.535 00:11:07.535 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xrtw9m8jy0se4pjhwroxkqaj0rd3v20mjmakc1p9kvcl61319gffjxw5hitifq2wkegkyr2smulfrklghumrdj5wdogzusgn13cwdno31ku4fe3ib4224ye09huhw3kihkw4pa3m6hcroqwr0ylg0oh029zfkh9f25xb09utjob1d7vr2hztpc9sxi3dxxclaygr08gt3qbgjl5s7d3ih2i4fa4iidwacnyij3sijhccq9xl2pn2brvq5rsao9f2wmqpqffvt9exkajoogle7rck1rv0qgnh0eydppmu8ebos44f324uuoh5opfdc187g6b909ux864p82gp8jzejsrs1cea77xfp3qos4jpzisd36jmml2lyp1l4upfce5w76fva7s94g76bfrqlqg2lmvqsr6lik9yew0soynedhxqcb6vx7x30getj3a4vlkxd08r2r540s9qies1m73ahwmpsjc36b4217fx3uohlkq3658kwuuvne4fyatqgda8 == \x\r\t\w\9\m\8\j\y\0\s\e\4\p\j\h\w\r\o\x\k\q\a\j\0\r\d\3\v\2\0\m\j\m\a\k\c\1\p\9\k\v\c\l\6\1\3\1\9\g\f\f\j\x\w\5\h\i\t\i\f\q\2\w\k\e\g\k\y\r\2\s\m\u\l\f\r\k\l\g\h\u\m\r\d\j\5\w\d\o\g\z\u\s\g\n\1\3\c\w\d\n\o\3\1\k\u\4\f\e\3\i\b\4\2\2\4\y\e\0\9\h\u\h\w\3\k\i\h\k\w\4\p\a\3\m\6\h\c\r\o\q\w\r\0\y\l\g\0\o\h\0\2\9\z\f\k\h\9\f\2\5\x\b\0\9\u\t\j\o\b\1\d\7\v\r\2\h\z\t\p\c\9\s\x\i\3\d\x\x\c\l\a\y\g\r\0\8\g\t\3\q\b\g\j\l\5\s\7\d\3\i\h\2\i\4\f\a\4\i\i\d\w\a\c\n\y\i\j\3\s\i\j\h\c\c\q\9\x\l\2\p\n\2\b\r\v\q\5\r\s\a\o\9\f\2\w\m\q\p\q\f\f\v\t\9\e\x\k\a\j\o\o\g\l\e\7\r\c\k\1\r\v\0\q\g\n\h\0\e\y\d\p\p\m\u\8\e\b\o\s\4\4\f\3\2\4\u\u\o\h\5\o\p\f\d\c\1\8\7\g\6\b\9\0\9\u\x\8\6\4\p\8\2\g\p\8\j\z\e\j\s\r\s\1\c\e\a\7\7\x\f\p\3\q\o\s\4\j\p\z\i\s\d\3\6\j\m\m\l\2\l\y\p\1\l\4\u\p\f\c\e\5\w\7\6\f\v\a\7\s\9\4\g\7\6\b\f\r\q\l\q\g\2\l\m\v\q\s\r\6\l\i\k\9\y\e\w\0\s\o\y\n\e\d\h\x\q\c\b\6\v\x\7\x\3\0\g\e\t\j\3\a\4\v\l\k\x\d\0\8\r\2\r\5\4\0\s\9\q\i\e\s\1\m\7\3\a\h\w\m\p\s\j\c\3\6\b\4\2\1\7\f\x\3\u\o\h\l\k\q\3\6\5\8\k\w\u\u\v\n\e\4\f\y\a\t\q\g\d\a\8 ]] 00:11:07.535 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:07.535 11:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:07.535 [2024-10-07 11:22:02.942421] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:07.535 [2024-10-07 11:22:02.942564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:11:07.793 [2024-10-07 11:22:03.081076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.793 [2024-10-07 11:22:03.202551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.793 [2024-10-07 11:22:03.257736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.793  [2024-10-07T11:22:03.576Z] Copying: 512/512 [B] (average 166 kBps) 00:11:08.053 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xrtw9m8jy0se4pjhwroxkqaj0rd3v20mjmakc1p9kvcl61319gffjxw5hitifq2wkegkyr2smulfrklghumrdj5wdogzusgn13cwdno31ku4fe3ib4224ye09huhw3kihkw4pa3m6hcroqwr0ylg0oh029zfkh9f25xb09utjob1d7vr2hztpc9sxi3dxxclaygr08gt3qbgjl5s7d3ih2i4fa4iidwacnyij3sijhccq9xl2pn2brvq5rsao9f2wmqpqffvt9exkajoogle7rck1rv0qgnh0eydppmu8ebos44f324uuoh5opfdc187g6b909ux864p82gp8jzejsrs1cea77xfp3qos4jpzisd36jmml2lyp1l4upfce5w76fva7s94g76bfrqlqg2lmvqsr6lik9yew0soynedhxqcb6vx7x30getj3a4vlkxd08r2r540s9qies1m73ahwmpsjc36b4217fx3uohlkq3658kwuuvne4fyatqgda8 == \x\r\t\w\9\m\8\j\y\0\s\e\4\p\j\h\w\r\o\x\k\q\a\j\0\r\d\3\v\2\0\m\j\m\a\k\c\1\p\9\k\v\c\l\6\1\3\1\9\g\f\f\j\x\w\5\h\i\t\i\f\q\2\w\k\e\g\k\y\r\2\s\m\u\l\f\r\k\l\g\h\u\m\r\d\j\5\w\d\o\g\z\u\s\g\n\1\3\c\w\d\n\o\3\1\k\u\4\f\e\3\i\b\4\2\2\4\y\e\0\9\h\u\h\w\3\k\i\h\k\w\4\p\a\3\m\6\h\c\r\o\q\w\r\0\y\l\g\0\o\h\0\2\9\z\f\k\h\9\f\2\5\x\b\0\9\u\t\j\o\b\1\d\7\v\r\2\h\z\t\p\c\9\s\x\i\3\d\x\x\c\l\a\y\g\r\0\8\g\t\3\q\b\g\j\l\5\s\7\d\3\i\h\2\i\4\f\a\4\i\i\d\w\a\c\n\y\i\j\3\s\i\j\h\c\c\q\9\x\l\2\p\n\2\b\r\v\q\5\r\s\a\o\9\f\2\w\m\q\p\q\f\f\v\t\9\e\x\k\a\j\o\o\g\l\e\7\r\c\k\1\r\v\0\q\g\n\h\0\e\y\d\p\p\m\u\8\e\b\o\s\4\4\f\3\2\4\u\u\o\h\5\o\p\f\d\c\1\8\7\g\6\b\9\0\9\u\x\8\6\4\p\8\2\g\p\8\j\z\e\j\s\r\s\1\c\e\a\7\7\x\f\p\3\q\o\s\4\j\p\z\i\s\d\3\6\j\m\m\l\2\l\y\p\1\l\4\u\p\f\c\e\5\w\7\6\f\v\a\7\s\9\4\g\7\6\b\f\r\q\l\q\g\2\l\m\v\q\s\r\6\l\i\k\9\y\e\w\0\s\o\y\n\e\d\h\x\q\c\b\6\v\x\7\x\3\0\g\e\t\j\3\a\4\v\l\k\x\d\0\8\r\2\r\5\4\0\s\9\q\i\e\s\1\m\7\3\a\h\w\m\p\s\j\c\3\6\b\4\2\1\7\f\x\3\u\o\h\l\k\q\3\6\5\8\k\w\u\u\v\n\e\4\f\y\a\t\q\g\d\a\8 ]] 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:08.053 11:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:08.320 [2024-10-07 11:22:03.598459] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:08.320 [2024-10-07 11:22:03.598573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60446 ] 00:11:08.320 [2024-10-07 11:22:03.734377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.578 [2024-10-07 11:22:03.851469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.578 [2024-10-07 11:22:03.909604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.578  [2024-10-07T11:22:04.360Z] Copying: 512/512 [B] (average 500 kBps) 00:11:08.837 00:11:08.838 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4i1laubepu09zcuzh19ih0oqp52p3fxjqye8d0pksf0rqniqiigrqec2u52kdjjcs50gpi73x2num6567rtjvtxshmsuo5jzyk3765v85i4jio1iay07qmt75pvwxb0rvbpcp1lie7784x5vgcveyoyc8cnerlbayt6w2nxypcucpfaudjv5e6xbk7q0ggeafllir5bh0ni09k48fp5lmzmp4t4tzgq2o2566es2bj27jb2o1nw3xsamc1z03vtio2hgidxpzyhw3mq5napwn59czvp3anxl6hcu1i5e120pez28buuqln9tzqdtei25lqxelxdyz7wo6q2hkf4co06dmirve0hcmpe22mw0tq11xd12pecw2audyxppxeo3oslaruq67zcb2bf8n47tp24rq11feek69h2jn4vgz07olqq0yveb6zxgb9j9l5g1dolcaqulh0jt4mow43y4pa5wn5muknih120wvn19867sv2fbxomgg6e5rl3ilidi == \4\i\1\l\a\u\b\e\p\u\0\9\z\c\u\z\h\1\9\i\h\0\o\q\p\5\2\p\3\f\x\j\q\y\e\8\d\0\p\k\s\f\0\r\q\n\i\q\i\i\g\r\q\e\c\2\u\5\2\k\d\j\j\c\s\5\0\g\p\i\7\3\x\2\n\u\m\6\5\6\7\r\t\j\v\t\x\s\h\m\s\u\o\5\j\z\y\k\3\7\6\5\v\8\5\i\4\j\i\o\1\i\a\y\0\7\q\m\t\7\5\p\v\w\x\b\0\r\v\b\p\c\p\1\l\i\e\7\7\8\4\x\5\v\g\c\v\e\y\o\y\c\8\c\n\e\r\l\b\a\y\t\6\w\2\n\x\y\p\c\u\c\p\f\a\u\d\j\v\5\e\6\x\b\k\7\q\0\g\g\e\a\f\l\l\i\r\5\b\h\0\n\i\0\9\k\4\8\f\p\5\l\m\z\m\p\4\t\4\t\z\g\q\2\o\2\5\6\6\e\s\2\b\j\2\7\j\b\2\o\1\n\w\3\x\s\a\m\c\1\z\0\3\v\t\i\o\2\h\g\i\d\x\p\z\y\h\w\3\m\q\5\n\a\p\w\n\5\9\c\z\v\p\3\a\n\x\l\6\h\c\u\1\i\5\e\1\2\0\p\e\z\2\8\b\u\u\q\l\n\9\t\z\q\d\t\e\i\2\5\l\q\x\e\l\x\d\y\z\7\w\o\6\q\2\h\k\f\4\c\o\0\6\d\m\i\r\v\e\0\h\c\m\p\e\2\2\m\w\0\t\q\1\1\x\d\1\2\p\e\c\w\2\a\u\d\y\x\p\p\x\e\o\3\o\s\l\a\r\u\q\6\7\z\c\b\2\b\f\8\n\4\7\t\p\2\4\r\q\1\1\f\e\e\k\6\9\h\2\j\n\4\v\g\z\0\7\o\l\q\q\0\y\v\e\b\6\z\x\g\b\9\j\9\l\5\g\1\d\o\l\c\a\q\u\l\h\0\j\t\4\m\o\w\4\3\y\4\p\a\5\w\n\5\m\u\k\n\i\h\1\2\0\w\v\n\1\9\8\6\7\s\v\2\f\b\x\o\m\g\g\6\e\5\r\l\3\i\l\i\d\i ]] 00:11:08.838 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:08.838 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:08.838 [2024-10-07 11:22:04.258118] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:08.838 [2024-10-07 11:22:04.258238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60461 ] 00:11:09.096 [2024-10-07 11:22:04.399389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.096 [2024-10-07 11:22:04.539406] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.096 [2024-10-07 11:22:04.601038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.354  [2024-10-07T11:22:04.877Z] Copying: 512/512 [B] (average 500 kBps) 00:11:09.354 00:11:09.613 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4i1laubepu09zcuzh19ih0oqp52p3fxjqye8d0pksf0rqniqiigrqec2u52kdjjcs50gpi73x2num6567rtjvtxshmsuo5jzyk3765v85i4jio1iay07qmt75pvwxb0rvbpcp1lie7784x5vgcveyoyc8cnerlbayt6w2nxypcucpfaudjv5e6xbk7q0ggeafllir5bh0ni09k48fp5lmzmp4t4tzgq2o2566es2bj27jb2o1nw3xsamc1z03vtio2hgidxpzyhw3mq5napwn59czvp3anxl6hcu1i5e120pez28buuqln9tzqdtei25lqxelxdyz7wo6q2hkf4co06dmirve0hcmpe22mw0tq11xd12pecw2audyxppxeo3oslaruq67zcb2bf8n47tp24rq11feek69h2jn4vgz07olqq0yveb6zxgb9j9l5g1dolcaqulh0jt4mow43y4pa5wn5muknih120wvn19867sv2fbxomgg6e5rl3ilidi == \4\i\1\l\a\u\b\e\p\u\0\9\z\c\u\z\h\1\9\i\h\0\o\q\p\5\2\p\3\f\x\j\q\y\e\8\d\0\p\k\s\f\0\r\q\n\i\q\i\i\g\r\q\e\c\2\u\5\2\k\d\j\j\c\s\5\0\g\p\i\7\3\x\2\n\u\m\6\5\6\7\r\t\j\v\t\x\s\h\m\s\u\o\5\j\z\y\k\3\7\6\5\v\8\5\i\4\j\i\o\1\i\a\y\0\7\q\m\t\7\5\p\v\w\x\b\0\r\v\b\p\c\p\1\l\i\e\7\7\8\4\x\5\v\g\c\v\e\y\o\y\c\8\c\n\e\r\l\b\a\y\t\6\w\2\n\x\y\p\c\u\c\p\f\a\u\d\j\v\5\e\6\x\b\k\7\q\0\g\g\e\a\f\l\l\i\r\5\b\h\0\n\i\0\9\k\4\8\f\p\5\l\m\z\m\p\4\t\4\t\z\g\q\2\o\2\5\6\6\e\s\2\b\j\2\7\j\b\2\o\1\n\w\3\x\s\a\m\c\1\z\0\3\v\t\i\o\2\h\g\i\d\x\p\z\y\h\w\3\m\q\5\n\a\p\w\n\5\9\c\z\v\p\3\a\n\x\l\6\h\c\u\1\i\5\e\1\2\0\p\e\z\2\8\b\u\u\q\l\n\9\t\z\q\d\t\e\i\2\5\l\q\x\e\l\x\d\y\z\7\w\o\6\q\2\h\k\f\4\c\o\0\6\d\m\i\r\v\e\0\h\c\m\p\e\2\2\m\w\0\t\q\1\1\x\d\1\2\p\e\c\w\2\a\u\d\y\x\p\p\x\e\o\3\o\s\l\a\r\u\q\6\7\z\c\b\2\b\f\8\n\4\7\t\p\2\4\r\q\1\1\f\e\e\k\6\9\h\2\j\n\4\v\g\z\0\7\o\l\q\q\0\y\v\e\b\6\z\x\g\b\9\j\9\l\5\g\1\d\o\l\c\a\q\u\l\h\0\j\t\4\m\o\w\4\3\y\4\p\a\5\w\n\5\m\u\k\n\i\h\1\2\0\w\v\n\1\9\8\6\7\s\v\2\f\b\x\o\m\g\g\6\e\5\r\l\3\i\l\i\d\i ]] 00:11:09.613 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:09.613 11:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:09.613 [2024-10-07 11:22:04.937137] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:09.613 [2024-10-07 11:22:04.937278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60470 ] 00:11:09.613 [2024-10-07 11:22:05.079628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.872 [2024-10-07 11:22:05.214391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.872 [2024-10-07 11:22:05.275114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.872  [2024-10-07T11:22:05.654Z] Copying: 512/512 [B] (average 166 kBps) 00:11:10.131 00:11:10.131 11:22:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4i1laubepu09zcuzh19ih0oqp52p3fxjqye8d0pksf0rqniqiigrqec2u52kdjjcs50gpi73x2num6567rtjvtxshmsuo5jzyk3765v85i4jio1iay07qmt75pvwxb0rvbpcp1lie7784x5vgcveyoyc8cnerlbayt6w2nxypcucpfaudjv5e6xbk7q0ggeafllir5bh0ni09k48fp5lmzmp4t4tzgq2o2566es2bj27jb2o1nw3xsamc1z03vtio2hgidxpzyhw3mq5napwn59czvp3anxl6hcu1i5e120pez28buuqln9tzqdtei25lqxelxdyz7wo6q2hkf4co06dmirve0hcmpe22mw0tq11xd12pecw2audyxppxeo3oslaruq67zcb2bf8n47tp24rq11feek69h2jn4vgz07olqq0yveb6zxgb9j9l5g1dolcaqulh0jt4mow43y4pa5wn5muknih120wvn19867sv2fbxomgg6e5rl3ilidi == \4\i\1\l\a\u\b\e\p\u\0\9\z\c\u\z\h\1\9\i\h\0\o\q\p\5\2\p\3\f\x\j\q\y\e\8\d\0\p\k\s\f\0\r\q\n\i\q\i\i\g\r\q\e\c\2\u\5\2\k\d\j\j\c\s\5\0\g\p\i\7\3\x\2\n\u\m\6\5\6\7\r\t\j\v\t\x\s\h\m\s\u\o\5\j\z\y\k\3\7\6\5\v\8\5\i\4\j\i\o\1\i\a\y\0\7\q\m\t\7\5\p\v\w\x\b\0\r\v\b\p\c\p\1\l\i\e\7\7\8\4\x\5\v\g\c\v\e\y\o\y\c\8\c\n\e\r\l\b\a\y\t\6\w\2\n\x\y\p\c\u\c\p\f\a\u\d\j\v\5\e\6\x\b\k\7\q\0\g\g\e\a\f\l\l\i\r\5\b\h\0\n\i\0\9\k\4\8\f\p\5\l\m\z\m\p\4\t\4\t\z\g\q\2\o\2\5\6\6\e\s\2\b\j\2\7\j\b\2\o\1\n\w\3\x\s\a\m\c\1\z\0\3\v\t\i\o\2\h\g\i\d\x\p\z\y\h\w\3\m\q\5\n\a\p\w\n\5\9\c\z\v\p\3\a\n\x\l\6\h\c\u\1\i\5\e\1\2\0\p\e\z\2\8\b\u\u\q\l\n\9\t\z\q\d\t\e\i\2\5\l\q\x\e\l\x\d\y\z\7\w\o\6\q\2\h\k\f\4\c\o\0\6\d\m\i\r\v\e\0\h\c\m\p\e\2\2\m\w\0\t\q\1\1\x\d\1\2\p\e\c\w\2\a\u\d\y\x\p\p\x\e\o\3\o\s\l\a\r\u\q\6\7\z\c\b\2\b\f\8\n\4\7\t\p\2\4\r\q\1\1\f\e\e\k\6\9\h\2\j\n\4\v\g\z\0\7\o\l\q\q\0\y\v\e\b\6\z\x\g\b\9\j\9\l\5\g\1\d\o\l\c\a\q\u\l\h\0\j\t\4\m\o\w\4\3\y\4\p\a\5\w\n\5\m\u\k\n\i\h\1\2\0\w\v\n\1\9\8\6\7\s\v\2\f\b\x\o\m\g\g\6\e\5\r\l\3\i\l\i\d\i ]] 00:11:10.131 11:22:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:10.131 11:22:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:10.131 [2024-10-07 11:22:05.604889] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:10.131 [2024-10-07 11:22:05.604997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60480 ] 00:11:10.391 [2024-10-07 11:22:05.744874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.391 [2024-10-07 11:22:05.870570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.649 [2024-10-07 11:22:05.928922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.649  [2024-10-07T11:22:06.432Z] Copying: 512/512 [B] (average 250 kBps) 00:11:10.909 00:11:10.909 ************************************ 00:11:10.909 END TEST dd_flags_misc 00:11:10.909 ************************************ 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4i1laubepu09zcuzh19ih0oqp52p3fxjqye8d0pksf0rqniqiigrqec2u52kdjjcs50gpi73x2num6567rtjvtxshmsuo5jzyk3765v85i4jio1iay07qmt75pvwxb0rvbpcp1lie7784x5vgcveyoyc8cnerlbayt6w2nxypcucpfaudjv5e6xbk7q0ggeafllir5bh0ni09k48fp5lmzmp4t4tzgq2o2566es2bj27jb2o1nw3xsamc1z03vtio2hgidxpzyhw3mq5napwn59czvp3anxl6hcu1i5e120pez28buuqln9tzqdtei25lqxelxdyz7wo6q2hkf4co06dmirve0hcmpe22mw0tq11xd12pecw2audyxppxeo3oslaruq67zcb2bf8n47tp24rq11feek69h2jn4vgz07olqq0yveb6zxgb9j9l5g1dolcaqulh0jt4mow43y4pa5wn5muknih120wvn19867sv2fbxomgg6e5rl3ilidi == \4\i\1\l\a\u\b\e\p\u\0\9\z\c\u\z\h\1\9\i\h\0\o\q\p\5\2\p\3\f\x\j\q\y\e\8\d\0\p\k\s\f\0\r\q\n\i\q\i\i\g\r\q\e\c\2\u\5\2\k\d\j\j\c\s\5\0\g\p\i\7\3\x\2\n\u\m\6\5\6\7\r\t\j\v\t\x\s\h\m\s\u\o\5\j\z\y\k\3\7\6\5\v\8\5\i\4\j\i\o\1\i\a\y\0\7\q\m\t\7\5\p\v\w\x\b\0\r\v\b\p\c\p\1\l\i\e\7\7\8\4\x\5\v\g\c\v\e\y\o\y\c\8\c\n\e\r\l\b\a\y\t\6\w\2\n\x\y\p\c\u\c\p\f\a\u\d\j\v\5\e\6\x\b\k\7\q\0\g\g\e\a\f\l\l\i\r\5\b\h\0\n\i\0\9\k\4\8\f\p\5\l\m\z\m\p\4\t\4\t\z\g\q\2\o\2\5\6\6\e\s\2\b\j\2\7\j\b\2\o\1\n\w\3\x\s\a\m\c\1\z\0\3\v\t\i\o\2\h\g\i\d\x\p\z\y\h\w\3\m\q\5\n\a\p\w\n\5\9\c\z\v\p\3\a\n\x\l\6\h\c\u\1\i\5\e\1\2\0\p\e\z\2\8\b\u\u\q\l\n\9\t\z\q\d\t\e\i\2\5\l\q\x\e\l\x\d\y\z\7\w\o\6\q\2\h\k\f\4\c\o\0\6\d\m\i\r\v\e\0\h\c\m\p\e\2\2\m\w\0\t\q\1\1\x\d\1\2\p\e\c\w\2\a\u\d\y\x\p\p\x\e\o\3\o\s\l\a\r\u\q\6\7\z\c\b\2\b\f\8\n\4\7\t\p\2\4\r\q\1\1\f\e\e\k\6\9\h\2\j\n\4\v\g\z\0\7\o\l\q\q\0\y\v\e\b\6\z\x\g\b\9\j\9\l\5\g\1\d\o\l\c\a\q\u\l\h\0\j\t\4\m\o\w\4\3\y\4\p\a\5\w\n\5\m\u\k\n\i\h\1\2\0\w\v\n\1\9\8\6\7\s\v\2\f\b\x\o\m\g\g\6\e\5\r\l\3\i\l\i\d\i ]] 00:11:10.909 00:11:10.909 real 0m5.259s 00:11:10.909 user 0m3.112s 00:11:10.909 sys 0m2.434s 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:11:10.909 * Second test run, disabling liburing, forcing AIO 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.909 ************************************ 00:11:10.909 START TEST dd_flag_append_forced_aio 00:11:10.909 ************************************ 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=imrrw9ef171rnnaiicz7mo9qr7cp4tfe 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=20tl04m0u3wez1a96ku027v898be6few 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s imrrw9ef171rnnaiicz7mo9qr7cp4tfe 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 20tl04m0u3wez1a96ku027v898be6few 00:11:10.909 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:10.909 [2024-10-07 11:22:06.323372] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:10.909 [2024-10-07 11:22:06.323498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60514 ] 00:11:11.168 [2024-10-07 11:22:06.464317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.168 [2024-10-07 11:22:06.584851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.168 [2024-10-07 11:22:06.640539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.168  [2024-10-07T11:22:06.949Z] Copying: 32/32 [B] (average 31 kBps) 00:11:11.426 00:11:11.426 ************************************ 00:11:11.426 END TEST dd_flag_append_forced_aio 00:11:11.426 ************************************ 00:11:11.426 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 20tl04m0u3wez1a96ku027v898be6fewimrrw9ef171rnnaiicz7mo9qr7cp4tfe == \2\0\t\l\0\4\m\0\u\3\w\e\z\1\a\9\6\k\u\0\2\7\v\8\9\8\b\e\6\f\e\w\i\m\r\r\w\9\e\f\1\7\1\r\n\n\a\i\i\c\z\7\m\o\9\q\r\7\c\p\4\t\f\e ]] 00:11:11.426 00:11:11.426 real 0m0.677s 00:11:11.426 user 0m0.399s 00:11:11.426 sys 0m0.158s 00:11:11.426 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.426 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:11.684 ************************************ 00:11:11.684 START TEST dd_flag_directory_forced_aio 00:11:11.684 ************************************ 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:11.684 11:22:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:11.684 11:22:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:11.684 [2024-10-07 11:22:07.047860] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:11.684 [2024-10-07 11:22:07.047978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60540 ] 00:11:11.684 [2024-10-07 11:22:07.188110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.942 [2024-10-07 11:22:07.303161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.942 [2024-10-07 11:22:07.363208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.942 [2024-10-07 11:22:07.403752] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:11.943 [2024-10-07 11:22:07.403809] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:11.943 [2024-10-07 11:22:07.403840] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.201 [2024-10-07 11:22:07.527850] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:12.201 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:12.202 11:22:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:12.202 [2024-10-07 11:22:07.696160] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:12.202 [2024-10-07 11:22:07.696274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60550 ] 00:11:12.460 [2024-10-07 11:22:07.835979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.460 [2024-10-07 11:22:07.967065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.719 [2024-10-07 11:22:08.025563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.719 [2024-10-07 11:22:08.064551] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:12.719 [2024-10-07 11:22:08.064613] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:12.719 [2024-10-07 11:22:08.064630] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.719 [2024-10-07 11:22:08.192972] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:12.979 11:22:08 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:12.979 00:11:12.979 real 0m1.315s 00:11:12.979 user 0m0.757s 00:11:12.979 sys 0m0.345s 00:11:12.979 ************************************ 00:11:12.979 END TEST dd_flag_directory_forced_aio 00:11:12.979 ************************************ 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:12.979 ************************************ 00:11:12.979 START TEST dd_flag_nofollow_forced_aio 00:11:12.979 ************************************ 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.979 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:12.980 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.980 [2024-10-07 11:22:08.414906] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:12.980 [2024-10-07 11:22:08.415013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:11:13.287 [2024-10-07 11:22:08.556112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.287 [2024-10-07 11:22:08.670149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.287 [2024-10-07 11:22:08.725680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.287 [2024-10-07 11:22:08.764537] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:13.287 [2024-10-07 11:22:08.764628] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:13.287 [2024-10-07 11:22:08.764660] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:13.545 [2024-10-07 11:22:08.887479] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.545 11:22:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.545 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:13.545 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:13.545 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:13.545 [2024-10-07 11:22:09.052331] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:13.545 [2024-10-07 11:22:09.052429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:11:13.804 [2024-10-07 11:22:09.185440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.804 [2024-10-07 11:22:09.299125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.062 [2024-10-07 11:22:09.358580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:14.062 [2024-10-07 11:22:09.397216] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:14.062 [2024-10-07 11:22:09.397286] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:14.062 [2024-10-07 11:22:09.397320] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:14.062 [2024-10-07 11:22:09.520454] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:14.321 11:22:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:14.321 [2024-10-07 11:22:09.693141] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:14.321 [2024-10-07 11:22:09.693251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60601 ] 00:11:14.321 [2024-10-07 11:22:09.832044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.579 [2024-10-07 11:22:09.967290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.579 [2024-10-07 11:22:10.022430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:14.579  [2024-10-07T11:22:10.361Z] Copying: 512/512 [B] (average 500 kBps) 00:11:14.838 00:11:14.838 ************************************ 00:11:14.838 END TEST dd_flag_nofollow_forced_aio 00:11:14.838 ************************************ 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ah0e2knzivg8eh71cytxb7p1rj79uyoz7vc6fmc64tnxrczp1uhl62oka2mvk30u5dgdndulfyh6ejod9a0js5guuztisftuvb4viy3o7913frtyyi7a9g2d2j0vb975dkiuxo459cgculx8z266mux5c00kqk11u9rkty3wqo2ceqp9jif0lph5iu8psxz887ubujvubz9h3cxud5qn584yizr8ajhb0fsszumgue73z9r9x1zql9x10ed9ke3l8omiuxqwcygp6bh0rlzyz24xu1f24oadrcodmlh7ghrqvvtqfokzs50q9ktsg4kzs8wh719qyj0b3u2fbdtp82b8er7y6pa0fupy1p0j4pwh5wvavksvvy8b1dj5amiq1jvj32t4jul53zbqw1lw79zj3bdlvfioh5aj5v6xc2u1nzap8buhpfny50k03dx5w56f65y3veily3fw24g91fv9gi65397tot36mhu0rgeu1s2sifu2yr3pp9czpb2g == \a\h\0\e\2\k\n\z\i\v\g\8\e\h\7\1\c\y\t\x\b\7\p\1\r\j\7\9\u\y\o\z\7\v\c\6\f\m\c\6\4\t\n\x\r\c\z\p\1\u\h\l\6\2\o\k\a\2\m\v\k\3\0\u\5\d\g\d\n\d\u\l\f\y\h\6\e\j\o\d\9\a\0\j\s\5\g\u\u\z\t\i\s\f\t\u\v\b\4\v\i\y\3\o\7\9\1\3\f\r\t\y\y\i\7\a\9\g\2\d\2\j\0\v\b\9\7\5\d\k\i\u\x\o\4\5\9\c\g\c\u\l\x\8\z\2\6\6\m\u\x\5\c\0\0\k\q\k\1\1\u\9\r\k\t\y\3\w\q\o\2\c\e\q\p\9\j\i\f\0\l\p\h\5\i\u\8\p\s\x\z\8\8\7\u\b\u\j\v\u\b\z\9\h\3\c\x\u\d\5\q\n\5\8\4\y\i\z\r\8\a\j\h\b\0\f\s\s\z\u\m\g\u\e\7\3\z\9\r\9\x\1\z\q\l\9\x\1\0\e\d\9\k\e\3\l\8\o\m\i\u\x\q\w\c\y\g\p\6\b\h\0\r\l\z\y\z\2\4\x\u\1\f\2\4\o\a\d\r\c\o\d\m\l\h\7\g\h\r\q\v\v\t\q\f\o\k\z\s\5\0\q\9\k\t\s\g\4\k\z\s\8\w\h\7\1\9\q\y\j\0\b\3\u\2\f\b\d\t\p\8\2\b\8\e\r\7\y\6\p\a\0\f\u\p\y\1\p\0\j\4\p\w\h\5\w\v\a\v\k\s\v\v\y\8\b\1\d\j\5\a\m\i\q\1\j\v\j\3\2\t\4\j\u\l\5\3\z\b\q\w\1\l\w\7\9\z\j\3\b\d\l\v\f\i\o\h\5\a\j\5\v\6\x\c\2\u\1\n\z\a\p\8\b\u\h\p\f\n\y\5\0\k\0\3\d\x\5\w\5\6\f\6\5\y\3\v\e\i\l\y\3\f\w\2\4\g\9\1\f\v\9\g\i\6\5\3\9\7\t\o\t\3\6\m\h\u\0\r\g\e\u\1\s\2\s\i\f\u\2\y\r\3\p\p\9\c\z\p\b\2\g ]] 00:11:14.838 00:11:14.838 real 0m1.951s 00:11:14.838 user 0m1.137s 00:11:14.838 sys 0m0.480s 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:14.838 ************************************ 00:11:14.838 START TEST dd_flag_noatime_forced_aio 00:11:14.838 ************************************ 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:14.838 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1728300130 00:11:15.097 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:15.097 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1728300130 00:11:15.097 11:22:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:11:16.084 11:22:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:16.084 [2024-10-07 11:22:11.428573] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:16.084 [2024-10-07 11:22:11.428689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:11:16.084 [2024-10-07 11:22:11.570190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.343 [2024-10-07 11:22:11.714465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.343 [2024-10-07 11:22:11.773770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.343  [2024-10-07T11:22:12.124Z] Copying: 512/512 [B] (average 500 kBps) 00:11:16.601 00:11:16.601 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:16.601 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1728300130 )) 00:11:16.601 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:16.601 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1728300130 )) 00:11:16.601 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:16.860 [2024-10-07 11:22:12.138285] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:16.860 [2024-10-07 11:22:12.138407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:11:16.860 [2024-10-07 11:22:12.276500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.118 [2024-10-07 11:22:12.390394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.118 [2024-10-07 11:22:12.448350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.118  [2024-10-07T11:22:12.900Z] Copying: 512/512 [B] (average 500 kBps) 00:11:17.377 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:17.377 ************************************ 00:11:17.377 END TEST dd_flag_noatime_forced_aio 00:11:17.377 ************************************ 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1728300132 )) 00:11:17.377 00:11:17.377 real 0m2.379s 00:11:17.377 user 0m0.804s 00:11:17.377 sys 0m0.330s 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.377 ************************************ 00:11:17.377 START TEST dd_flags_misc_forced_aio 00:11:17.377 ************************************ 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:17.377 11:22:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:17.377 [2024-10-07 11:22:12.832978] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:17.377 [2024-10-07 11:22:12.833076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:11:17.636 [2024-10-07 11:22:12.970179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.636 [2024-10-07 11:22:13.072485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.636 [2024-10-07 11:22:13.125160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.895  [2024-10-07T11:22:13.418Z] Copying: 512/512 [B] (average 500 kBps) 00:11:17.895 00:11:17.895 11:22:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gqgg1tqfm9fdn8fefk38liy0xels4021kditl6729t9p0dofdhmu1un3e4j7m1vdb2py0j21felzbm5m40cpdk6n819vxc5x2s74k7qz804q581gs7u4pb2izwzrfquc7jz73ylugeoaztfsb4aw4a6v8jyc0cls7aztd49to6oawsuf7bye2811l16y23wtf0nddsfuenj48z253uqq0obxkyj18qhkxj7chi6tuwu1qbx84kjdvids3hxni25gsr43zwa0sfkrbrqy09syacuwnbyx5msnmizfps0b42xvy629bw159pvmhcs9ae9yff4mcv6hs0bnwma9h1l9pjqp3yc8dpjo2ohrk9750uaxahb7cdhq8qsk9gw7ih2qh3iabf5jnen66d3iywywwkowkqhlgyx5on1mqefv7mxfctbtzohvsh9dgc88e96l08vutxz52fu0iz9khvdpa28bjs7z1n734fhxpq0be1fxxjqy31kmycku1ynipdaq == 
\g\q\g\g\1\t\q\f\m\9\f\d\n\8\f\e\f\k\3\8\l\i\y\0\x\e\l\s\4\0\2\1\k\d\i\t\l\6\7\2\9\t\9\p\0\d\o\f\d\h\m\u\1\u\n\3\e\4\j\7\m\1\v\d\b\2\p\y\0\j\2\1\f\e\l\z\b\m\5\m\4\0\c\p\d\k\6\n\8\1\9\v\x\c\5\x\2\s\7\4\k\7\q\z\8\0\4\q\5\8\1\g\s\7\u\4\p\b\2\i\z\w\z\r\f\q\u\c\7\j\z\7\3\y\l\u\g\e\o\a\z\t\f\s\b\4\a\w\4\a\6\v\8\j\y\c\0\c\l\s\7\a\z\t\d\4\9\t\o\6\o\a\w\s\u\f\7\b\y\e\2\8\1\1\l\1\6\y\2\3\w\t\f\0\n\d\d\s\f\u\e\n\j\4\8\z\2\5\3\u\q\q\0\o\b\x\k\y\j\1\8\q\h\k\x\j\7\c\h\i\6\t\u\w\u\1\q\b\x\8\4\k\j\d\v\i\d\s\3\h\x\n\i\2\5\g\s\r\4\3\z\w\a\0\s\f\k\r\b\r\q\y\0\9\s\y\a\c\u\w\n\b\y\x\5\m\s\n\m\i\z\f\p\s\0\b\4\2\x\v\y\6\2\9\b\w\1\5\9\p\v\m\h\c\s\9\a\e\9\y\f\f\4\m\c\v\6\h\s\0\b\n\w\m\a\9\h\1\l\9\p\j\q\p\3\y\c\8\d\p\j\o\2\o\h\r\k\9\7\5\0\u\a\x\a\h\b\7\c\d\h\q\8\q\s\k\9\g\w\7\i\h\2\q\h\3\i\a\b\f\5\j\n\e\n\6\6\d\3\i\y\w\y\w\w\k\o\w\k\q\h\l\g\y\x\5\o\n\1\m\q\e\f\v\7\m\x\f\c\t\b\t\z\o\h\v\s\h\9\d\g\c\8\8\e\9\6\l\0\8\v\u\t\x\z\5\2\f\u\0\i\z\9\k\h\v\d\p\a\2\8\b\j\s\7\z\1\n\7\3\4\f\h\x\p\q\0\b\e\1\f\x\x\j\q\y\3\1\k\m\y\c\k\u\1\y\n\i\p\d\a\q ]] 00:11:17.895 11:22:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:17.895 11:22:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:18.153 [2024-10-07 11:22:13.457753] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:18.153 [2024-10-07 11:22:13.457854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:11:18.153 [2024-10-07 11:22:13.593385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.412 [2024-10-07 11:22:13.703347] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.412 [2024-10-07 11:22:13.756600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.412  [2024-10-07T11:22:14.194Z] Copying: 512/512 [B] (average 500 kBps) 00:11:18.671 00:11:18.671 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gqgg1tqfm9fdn8fefk38liy0xels4021kditl6729t9p0dofdhmu1un3e4j7m1vdb2py0j21felzbm5m40cpdk6n819vxc5x2s74k7qz804q581gs7u4pb2izwzrfquc7jz73ylugeoaztfsb4aw4a6v8jyc0cls7aztd49to6oawsuf7bye2811l16y23wtf0nddsfuenj48z253uqq0obxkyj18qhkxj7chi6tuwu1qbx84kjdvids3hxni25gsr43zwa0sfkrbrqy09syacuwnbyx5msnmizfps0b42xvy629bw159pvmhcs9ae9yff4mcv6hs0bnwma9h1l9pjqp3yc8dpjo2ohrk9750uaxahb7cdhq8qsk9gw7ih2qh3iabf5jnen66d3iywywwkowkqhlgyx5on1mqefv7mxfctbtzohvsh9dgc88e96l08vutxz52fu0iz9khvdpa28bjs7z1n734fhxpq0be1fxxjqy31kmycku1ynipdaq == 
\g\q\g\g\1\t\q\f\m\9\f\d\n\8\f\e\f\k\3\8\l\i\y\0\x\e\l\s\4\0\2\1\k\d\i\t\l\6\7\2\9\t\9\p\0\d\o\f\d\h\m\u\1\u\n\3\e\4\j\7\m\1\v\d\b\2\p\y\0\j\2\1\f\e\l\z\b\m\5\m\4\0\c\p\d\k\6\n\8\1\9\v\x\c\5\x\2\s\7\4\k\7\q\z\8\0\4\q\5\8\1\g\s\7\u\4\p\b\2\i\z\w\z\r\f\q\u\c\7\j\z\7\3\y\l\u\g\e\o\a\z\t\f\s\b\4\a\w\4\a\6\v\8\j\y\c\0\c\l\s\7\a\z\t\d\4\9\t\o\6\o\a\w\s\u\f\7\b\y\e\2\8\1\1\l\1\6\y\2\3\w\t\f\0\n\d\d\s\f\u\e\n\j\4\8\z\2\5\3\u\q\q\0\o\b\x\k\y\j\1\8\q\h\k\x\j\7\c\h\i\6\t\u\w\u\1\q\b\x\8\4\k\j\d\v\i\d\s\3\h\x\n\i\2\5\g\s\r\4\3\z\w\a\0\s\f\k\r\b\r\q\y\0\9\s\y\a\c\u\w\n\b\y\x\5\m\s\n\m\i\z\f\p\s\0\b\4\2\x\v\y\6\2\9\b\w\1\5\9\p\v\m\h\c\s\9\a\e\9\y\f\f\4\m\c\v\6\h\s\0\b\n\w\m\a\9\h\1\l\9\p\j\q\p\3\y\c\8\d\p\j\o\2\o\h\r\k\9\7\5\0\u\a\x\a\h\b\7\c\d\h\q\8\q\s\k\9\g\w\7\i\h\2\q\h\3\i\a\b\f\5\j\n\e\n\6\6\d\3\i\y\w\y\w\w\k\o\w\k\q\h\l\g\y\x\5\o\n\1\m\q\e\f\v\7\m\x\f\c\t\b\t\z\o\h\v\s\h\9\d\g\c\8\8\e\9\6\l\0\8\v\u\t\x\z\5\2\f\u\0\i\z\9\k\h\v\d\p\a\2\8\b\j\s\7\z\1\n\7\3\4\f\h\x\p\q\0\b\e\1\f\x\x\j\q\y\3\1\k\m\y\c\k\u\1\y\n\i\p\d\a\q ]] 00:11:18.671 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:18.671 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:18.671 [2024-10-07 11:22:14.080647] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:18.671 [2024-10-07 11:22:14.080745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60700 ] 00:11:18.929 [2024-10-07 11:22:14.213032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.929 [2024-10-07 11:22:14.327344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.929 [2024-10-07 11:22:14.380757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.929  [2024-10-07T11:22:14.711Z] Copying: 512/512 [B] (average 250 kBps) 00:11:19.188 00:11:19.188 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gqgg1tqfm9fdn8fefk38liy0xels4021kditl6729t9p0dofdhmu1un3e4j7m1vdb2py0j21felzbm5m40cpdk6n819vxc5x2s74k7qz804q581gs7u4pb2izwzrfquc7jz73ylugeoaztfsb4aw4a6v8jyc0cls7aztd49to6oawsuf7bye2811l16y23wtf0nddsfuenj48z253uqq0obxkyj18qhkxj7chi6tuwu1qbx84kjdvids3hxni25gsr43zwa0sfkrbrqy09syacuwnbyx5msnmizfps0b42xvy629bw159pvmhcs9ae9yff4mcv6hs0bnwma9h1l9pjqp3yc8dpjo2ohrk9750uaxahb7cdhq8qsk9gw7ih2qh3iabf5jnen66d3iywywwkowkqhlgyx5on1mqefv7mxfctbtzohvsh9dgc88e96l08vutxz52fu0iz9khvdpa28bjs7z1n734fhxpq0be1fxxjqy31kmycku1ynipdaq == 
\g\q\g\g\1\t\q\f\m\9\f\d\n\8\f\e\f\k\3\8\l\i\y\0\x\e\l\s\4\0\2\1\k\d\i\t\l\6\7\2\9\t\9\p\0\d\o\f\d\h\m\u\1\u\n\3\e\4\j\7\m\1\v\d\b\2\p\y\0\j\2\1\f\e\l\z\b\m\5\m\4\0\c\p\d\k\6\n\8\1\9\v\x\c\5\x\2\s\7\4\k\7\q\z\8\0\4\q\5\8\1\g\s\7\u\4\p\b\2\i\z\w\z\r\f\q\u\c\7\j\z\7\3\y\l\u\g\e\o\a\z\t\f\s\b\4\a\w\4\a\6\v\8\j\y\c\0\c\l\s\7\a\z\t\d\4\9\t\o\6\o\a\w\s\u\f\7\b\y\e\2\8\1\1\l\1\6\y\2\3\w\t\f\0\n\d\d\s\f\u\e\n\j\4\8\z\2\5\3\u\q\q\0\o\b\x\k\y\j\1\8\q\h\k\x\j\7\c\h\i\6\t\u\w\u\1\q\b\x\8\4\k\j\d\v\i\d\s\3\h\x\n\i\2\5\g\s\r\4\3\z\w\a\0\s\f\k\r\b\r\q\y\0\9\s\y\a\c\u\w\n\b\y\x\5\m\s\n\m\i\z\f\p\s\0\b\4\2\x\v\y\6\2\9\b\w\1\5\9\p\v\m\h\c\s\9\a\e\9\y\f\f\4\m\c\v\6\h\s\0\b\n\w\m\a\9\h\1\l\9\p\j\q\p\3\y\c\8\d\p\j\o\2\o\h\r\k\9\7\5\0\u\a\x\a\h\b\7\c\d\h\q\8\q\s\k\9\g\w\7\i\h\2\q\h\3\i\a\b\f\5\j\n\e\n\6\6\d\3\i\y\w\y\w\w\k\o\w\k\q\h\l\g\y\x\5\o\n\1\m\q\e\f\v\7\m\x\f\c\t\b\t\z\o\h\v\s\h\9\d\g\c\8\8\e\9\6\l\0\8\v\u\t\x\z\5\2\f\u\0\i\z\9\k\h\v\d\p\a\2\8\b\j\s\7\z\1\n\7\3\4\f\h\x\p\q\0\b\e\1\f\x\x\j\q\y\3\1\k\m\y\c\k\u\1\y\n\i\p\d\a\q ]] 00:11:19.188 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:19.188 11:22:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:19.470 [2024-10-07 11:22:14.716434] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:19.470 [2024-10-07 11:22:14.716542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:11:19.470 [2024-10-07 11:22:14.853017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.470 [2024-10-07 11:22:14.966861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.729 [2024-10-07 11:22:15.021410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.729  [2024-10-07T11:22:15.511Z] Copying: 512/512 [B] (average 250 kBps) 00:11:19.988 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gqgg1tqfm9fdn8fefk38liy0xels4021kditl6729t9p0dofdhmu1un3e4j7m1vdb2py0j21felzbm5m40cpdk6n819vxc5x2s74k7qz804q581gs7u4pb2izwzrfquc7jz73ylugeoaztfsb4aw4a6v8jyc0cls7aztd49to6oawsuf7bye2811l16y23wtf0nddsfuenj48z253uqq0obxkyj18qhkxj7chi6tuwu1qbx84kjdvids3hxni25gsr43zwa0sfkrbrqy09syacuwnbyx5msnmizfps0b42xvy629bw159pvmhcs9ae9yff4mcv6hs0bnwma9h1l9pjqp3yc8dpjo2ohrk9750uaxahb7cdhq8qsk9gw7ih2qh3iabf5jnen66d3iywywwkowkqhlgyx5on1mqefv7mxfctbtzohvsh9dgc88e96l08vutxz52fu0iz9khvdpa28bjs7z1n734fhxpq0be1fxxjqy31kmycku1ynipdaq == 
\g\q\g\g\1\t\q\f\m\9\f\d\n\8\f\e\f\k\3\8\l\i\y\0\x\e\l\s\4\0\2\1\k\d\i\t\l\6\7\2\9\t\9\p\0\d\o\f\d\h\m\u\1\u\n\3\e\4\j\7\m\1\v\d\b\2\p\y\0\j\2\1\f\e\l\z\b\m\5\m\4\0\c\p\d\k\6\n\8\1\9\v\x\c\5\x\2\s\7\4\k\7\q\z\8\0\4\q\5\8\1\g\s\7\u\4\p\b\2\i\z\w\z\r\f\q\u\c\7\j\z\7\3\y\l\u\g\e\o\a\z\t\f\s\b\4\a\w\4\a\6\v\8\j\y\c\0\c\l\s\7\a\z\t\d\4\9\t\o\6\o\a\w\s\u\f\7\b\y\e\2\8\1\1\l\1\6\y\2\3\w\t\f\0\n\d\d\s\f\u\e\n\j\4\8\z\2\5\3\u\q\q\0\o\b\x\k\y\j\1\8\q\h\k\x\j\7\c\h\i\6\t\u\w\u\1\q\b\x\8\4\k\j\d\v\i\d\s\3\h\x\n\i\2\5\g\s\r\4\3\z\w\a\0\s\f\k\r\b\r\q\y\0\9\s\y\a\c\u\w\n\b\y\x\5\m\s\n\m\i\z\f\p\s\0\b\4\2\x\v\y\6\2\9\b\w\1\5\9\p\v\m\h\c\s\9\a\e\9\y\f\f\4\m\c\v\6\h\s\0\b\n\w\m\a\9\h\1\l\9\p\j\q\p\3\y\c\8\d\p\j\o\2\o\h\r\k\9\7\5\0\u\a\x\a\h\b\7\c\d\h\q\8\q\s\k\9\g\w\7\i\h\2\q\h\3\i\a\b\f\5\j\n\e\n\6\6\d\3\i\y\w\y\w\w\k\o\w\k\q\h\l\g\y\x\5\o\n\1\m\q\e\f\v\7\m\x\f\c\t\b\t\z\o\h\v\s\h\9\d\g\c\8\8\e\9\6\l\0\8\v\u\t\x\z\5\2\f\u\0\i\z\9\k\h\v\d\p\a\2\8\b\j\s\7\z\1\n\7\3\4\f\h\x\p\q\0\b\e\1\f\x\x\j\q\y\3\1\k\m\y\c\k\u\1\y\n\i\p\d\a\q ]] 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:19.988 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:19.988 [2024-10-07 11:22:15.392475] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:19.988 [2024-10-07 11:22:15.392570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:11:20.250 [2024-10-07 11:22:15.526262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.250 [2024-10-07 11:22:15.640104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.250 [2024-10-07 11:22:15.697798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.250  [2024-10-07T11:22:16.032Z] Copying: 512/512 [B] (average 500 kBps) 00:11:20.509 00:11:20.509 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ fpaavpgtc7cdf31cck3b785ez4t8un1xh5ene9355ltz2cr2xcxn4l43a9cnzjd2u8597vue1kodqitdwuv0tyda43r0phamxbhhc9ouvut6ryfrdrqim76y9m0bss6rouopq73yu71o2sd6mbhiob1x1waqcjvhxl49813w91nquao5gvi8cnf2g0c49s9mmhlkcyjfabtqsgon5sk9balcxxciz7u4lesmvrw98d0to26bxys1l4zsc4ctmcb2jax1mx1w8pr6yd1tv6stj6416ejjdgbevi31gp6gpr5twxqu272nlqif38tx7c5z87rwhypjjd1j9ga18vzya56d98ilsdkkhbbfq5o18zdoy13d6i48fykytqav2o7nidrzb5xdil7801xsciwwspz8ldkiff1b428rsx6fz8nsh0tjqlv4agaohb0jklzq2whvbfd9c1y7yw3gjsseh8tn7kqnxqkcl78st3t15zqnega9jn2f5s1jiz71hlpu == \f\p\a\a\v\p\g\t\c\7\c\d\f\3\1\c\c\k\3\b\7\8\5\e\z\4\t\8\u\n\1\x\h\5\e\n\e\9\3\5\5\l\t\z\2\c\r\2\x\c\x\n\4\l\4\3\a\9\c\n\z\j\d\2\u\8\5\9\7\v\u\e\1\k\o\d\q\i\t\d\w\u\v\0\t\y\d\a\4\3\r\0\p\h\a\m\x\b\h\h\c\9\o\u\v\u\t\6\r\y\f\r\d\r\q\i\m\7\6\y\9\m\0\b\s\s\6\r\o\u\o\p\q\7\3\y\u\7\1\o\2\s\d\6\m\b\h\i\o\b\1\x\1\w\a\q\c\j\v\h\x\l\4\9\8\1\3\w\9\1\n\q\u\a\o\5\g\v\i\8\c\n\f\2\g\0\c\4\9\s\9\m\m\h\l\k\c\y\j\f\a\b\t\q\s\g\o\n\5\s\k\9\b\a\l\c\x\x\c\i\z\7\u\4\l\e\s\m\v\r\w\9\8\d\0\t\o\2\6\b\x\y\s\1\l\4\z\s\c\4\c\t\m\c\b\2\j\a\x\1\m\x\1\w\8\p\r\6\y\d\1\t\v\6\s\t\j\6\4\1\6\e\j\j\d\g\b\e\v\i\3\1\g\p\6\g\p\r\5\t\w\x\q\u\2\7\2\n\l\q\i\f\3\8\t\x\7\c\5\z\8\7\r\w\h\y\p\j\j\d\1\j\9\g\a\1\8\v\z\y\a\5\6\d\9\8\i\l\s\d\k\k\h\b\b\f\q\5\o\1\8\z\d\o\y\1\3\d\6\i\4\8\f\y\k\y\t\q\a\v\2\o\7\n\i\d\r\z\b\5\x\d\i\l\7\8\0\1\x\s\c\i\w\w\s\p\z\8\l\d\k\i\f\f\1\b\4\2\8\r\s\x\6\f\z\8\n\s\h\0\t\j\q\l\v\4\a\g\a\o\h\b\0\j\k\l\z\q\2\w\h\v\b\f\d\9\c\1\y\7\y\w\3\g\j\s\s\e\h\8\t\n\7\k\q\n\x\q\k\c\l\7\8\s\t\3\t\1\5\z\q\n\e\g\a\9\j\n\2\f\5\s\1\j\i\z\7\1\h\l\p\u ]] 00:11:20.509 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:20.509 11:22:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:20.831 [2024-10-07 11:22:16.051120] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:20.831 [2024-10-07 11:22:16.051231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:11:20.831 [2024-10-07 11:22:16.187940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.831 [2024-10-07 11:22:16.304359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.090 [2024-10-07 11:22:16.359209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.090  [2024-10-07T11:22:16.871Z] Copying: 512/512 [B] (average 500 kBps) 00:11:21.348 00:11:21.348 11:22:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ fpaavpgtc7cdf31cck3b785ez4t8un1xh5ene9355ltz2cr2xcxn4l43a9cnzjd2u8597vue1kodqitdwuv0tyda43r0phamxbhhc9ouvut6ryfrdrqim76y9m0bss6rouopq73yu71o2sd6mbhiob1x1waqcjvhxl49813w91nquao5gvi8cnf2g0c49s9mmhlkcyjfabtqsgon5sk9balcxxciz7u4lesmvrw98d0to26bxys1l4zsc4ctmcb2jax1mx1w8pr6yd1tv6stj6416ejjdgbevi31gp6gpr5twxqu272nlqif38tx7c5z87rwhypjjd1j9ga18vzya56d98ilsdkkhbbfq5o18zdoy13d6i48fykytqav2o7nidrzb5xdil7801xsciwwspz8ldkiff1b428rsx6fz8nsh0tjqlv4agaohb0jklzq2whvbfd9c1y7yw3gjsseh8tn7kqnxqkcl78st3t15zqnega9jn2f5s1jiz71hlpu == \f\p\a\a\v\p\g\t\c\7\c\d\f\3\1\c\c\k\3\b\7\8\5\e\z\4\t\8\u\n\1\x\h\5\e\n\e\9\3\5\5\l\t\z\2\c\r\2\x\c\x\n\4\l\4\3\a\9\c\n\z\j\d\2\u\8\5\9\7\v\u\e\1\k\o\d\q\i\t\d\w\u\v\0\t\y\d\a\4\3\r\0\p\h\a\m\x\b\h\h\c\9\o\u\v\u\t\6\r\y\f\r\d\r\q\i\m\7\6\y\9\m\0\b\s\s\6\r\o\u\o\p\q\7\3\y\u\7\1\o\2\s\d\6\m\b\h\i\o\b\1\x\1\w\a\q\c\j\v\h\x\l\4\9\8\1\3\w\9\1\n\q\u\a\o\5\g\v\i\8\c\n\f\2\g\0\c\4\9\s\9\m\m\h\l\k\c\y\j\f\a\b\t\q\s\g\o\n\5\s\k\9\b\a\l\c\x\x\c\i\z\7\u\4\l\e\s\m\v\r\w\9\8\d\0\t\o\2\6\b\x\y\s\1\l\4\z\s\c\4\c\t\m\c\b\2\j\a\x\1\m\x\1\w\8\p\r\6\y\d\1\t\v\6\s\t\j\6\4\1\6\e\j\j\d\g\b\e\v\i\3\1\g\p\6\g\p\r\5\t\w\x\q\u\2\7\2\n\l\q\i\f\3\8\t\x\7\c\5\z\8\7\r\w\h\y\p\j\j\d\1\j\9\g\a\1\8\v\z\y\a\5\6\d\9\8\i\l\s\d\k\k\h\b\b\f\q\5\o\1\8\z\d\o\y\1\3\d\6\i\4\8\f\y\k\y\t\q\a\v\2\o\7\n\i\d\r\z\b\5\x\d\i\l\7\8\0\1\x\s\c\i\w\w\s\p\z\8\l\d\k\i\f\f\1\b\4\2\8\r\s\x\6\f\z\8\n\s\h\0\t\j\q\l\v\4\a\g\a\o\h\b\0\j\k\l\z\q\2\w\h\v\b\f\d\9\c\1\y\7\y\w\3\g\j\s\s\e\h\8\t\n\7\k\q\n\x\q\k\c\l\7\8\s\t\3\t\1\5\z\q\n\e\g\a\9\j\n\2\f\5\s\1\j\i\z\7\1\h\l\p\u ]] 00:11:21.348 11:22:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:21.348 11:22:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:21.348 [2024-10-07 11:22:16.718400] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:21.348 [2024-10-07 11:22:16.718547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 00:11:21.348 [2024-10-07 11:22:16.856989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.607 [2024-10-07 11:22:16.955353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.607 [2024-10-07 11:22:17.009886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.607  [2024-10-07T11:22:17.389Z] Copying: 512/512 [B] (average 500 kBps) 00:11:21.866 00:11:21.866 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ fpaavpgtc7cdf31cck3b785ez4t8un1xh5ene9355ltz2cr2xcxn4l43a9cnzjd2u8597vue1kodqitdwuv0tyda43r0phamxbhhc9ouvut6ryfrdrqim76y9m0bss6rouopq73yu71o2sd6mbhiob1x1waqcjvhxl49813w91nquao5gvi8cnf2g0c49s9mmhlkcyjfabtqsgon5sk9balcxxciz7u4lesmvrw98d0to26bxys1l4zsc4ctmcb2jax1mx1w8pr6yd1tv6stj6416ejjdgbevi31gp6gpr5twxqu272nlqif38tx7c5z87rwhypjjd1j9ga18vzya56d98ilsdkkhbbfq5o18zdoy13d6i48fykytqav2o7nidrzb5xdil7801xsciwwspz8ldkiff1b428rsx6fz8nsh0tjqlv4agaohb0jklzq2whvbfd9c1y7yw3gjsseh8tn7kqnxqkcl78st3t15zqnega9jn2f5s1jiz71hlpu == \f\p\a\a\v\p\g\t\c\7\c\d\f\3\1\c\c\k\3\b\7\8\5\e\z\4\t\8\u\n\1\x\h\5\e\n\e\9\3\5\5\l\t\z\2\c\r\2\x\c\x\n\4\l\4\3\a\9\c\n\z\j\d\2\u\8\5\9\7\v\u\e\1\k\o\d\q\i\t\d\w\u\v\0\t\y\d\a\4\3\r\0\p\h\a\m\x\b\h\h\c\9\o\u\v\u\t\6\r\y\f\r\d\r\q\i\m\7\6\y\9\m\0\b\s\s\6\r\o\u\o\p\q\7\3\y\u\7\1\o\2\s\d\6\m\b\h\i\o\b\1\x\1\w\a\q\c\j\v\h\x\l\4\9\8\1\3\w\9\1\n\q\u\a\o\5\g\v\i\8\c\n\f\2\g\0\c\4\9\s\9\m\m\h\l\k\c\y\j\f\a\b\t\q\s\g\o\n\5\s\k\9\b\a\l\c\x\x\c\i\z\7\u\4\l\e\s\m\v\r\w\9\8\d\0\t\o\2\6\b\x\y\s\1\l\4\z\s\c\4\c\t\m\c\b\2\j\a\x\1\m\x\1\w\8\p\r\6\y\d\1\t\v\6\s\t\j\6\4\1\6\e\j\j\d\g\b\e\v\i\3\1\g\p\6\g\p\r\5\t\w\x\q\u\2\7\2\n\l\q\i\f\3\8\t\x\7\c\5\z\8\7\r\w\h\y\p\j\j\d\1\j\9\g\a\1\8\v\z\y\a\5\6\d\9\8\i\l\s\d\k\k\h\b\b\f\q\5\o\1\8\z\d\o\y\1\3\d\6\i\4\8\f\y\k\y\t\q\a\v\2\o\7\n\i\d\r\z\b\5\x\d\i\l\7\8\0\1\x\s\c\i\w\w\s\p\z\8\l\d\k\i\f\f\1\b\4\2\8\r\s\x\6\f\z\8\n\s\h\0\t\j\q\l\v\4\a\g\a\o\h\b\0\j\k\l\z\q\2\w\h\v\b\f\d\9\c\1\y\7\y\w\3\g\j\s\s\e\h\8\t\n\7\k\q\n\x\q\k\c\l\7\8\s\t\3\t\1\5\z\q\n\e\g\a\9\j\n\2\f\5\s\1\j\i\z\7\1\h\l\p\u ]] 00:11:21.866 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:21.866 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:21.866 [2024-10-07 11:22:17.348361] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:21.866 [2024-10-07 11:22:17.348458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:11:22.124 [2024-10-07 11:22:17.485847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.125 [2024-10-07 11:22:17.607667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.383 [2024-10-07 11:22:17.662963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.383  [2024-10-07T11:22:18.164Z] Copying: 512/512 [B] (average 500 kBps) 00:11:22.641 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ fpaavpgtc7cdf31cck3b785ez4t8un1xh5ene9355ltz2cr2xcxn4l43a9cnzjd2u8597vue1kodqitdwuv0tyda43r0phamxbhhc9ouvut6ryfrdrqim76y9m0bss6rouopq73yu71o2sd6mbhiob1x1waqcjvhxl49813w91nquao5gvi8cnf2g0c49s9mmhlkcyjfabtqsgon5sk9balcxxciz7u4lesmvrw98d0to26bxys1l4zsc4ctmcb2jax1mx1w8pr6yd1tv6stj6416ejjdgbevi31gp6gpr5twxqu272nlqif38tx7c5z87rwhypjjd1j9ga18vzya56d98ilsdkkhbbfq5o18zdoy13d6i48fykytqav2o7nidrzb5xdil7801xsciwwspz8ldkiff1b428rsx6fz8nsh0tjqlv4agaohb0jklzq2whvbfd9c1y7yw3gjsseh8tn7kqnxqkcl78st3t15zqnega9jn2f5s1jiz71hlpu == \f\p\a\a\v\p\g\t\c\7\c\d\f\3\1\c\c\k\3\b\7\8\5\e\z\4\t\8\u\n\1\x\h\5\e\n\e\9\3\5\5\l\t\z\2\c\r\2\x\c\x\n\4\l\4\3\a\9\c\n\z\j\d\2\u\8\5\9\7\v\u\e\1\k\o\d\q\i\t\d\w\u\v\0\t\y\d\a\4\3\r\0\p\h\a\m\x\b\h\h\c\9\o\u\v\u\t\6\r\y\f\r\d\r\q\i\m\7\6\y\9\m\0\b\s\s\6\r\o\u\o\p\q\7\3\y\u\7\1\o\2\s\d\6\m\b\h\i\o\b\1\x\1\w\a\q\c\j\v\h\x\l\4\9\8\1\3\w\9\1\n\q\u\a\o\5\g\v\i\8\c\n\f\2\g\0\c\4\9\s\9\m\m\h\l\k\c\y\j\f\a\b\t\q\s\g\o\n\5\s\k\9\b\a\l\c\x\x\c\i\z\7\u\4\l\e\s\m\v\r\w\9\8\d\0\t\o\2\6\b\x\y\s\1\l\4\z\s\c\4\c\t\m\c\b\2\j\a\x\1\m\x\1\w\8\p\r\6\y\d\1\t\v\6\s\t\j\6\4\1\6\e\j\j\d\g\b\e\v\i\3\1\g\p\6\g\p\r\5\t\w\x\q\u\2\7\2\n\l\q\i\f\3\8\t\x\7\c\5\z\8\7\r\w\h\y\p\j\j\d\1\j\9\g\a\1\8\v\z\y\a\5\6\d\9\8\i\l\s\d\k\k\h\b\b\f\q\5\o\1\8\z\d\o\y\1\3\d\6\i\4\8\f\y\k\y\t\q\a\v\2\o\7\n\i\d\r\z\b\5\x\d\i\l\7\8\0\1\x\s\c\i\w\w\s\p\z\8\l\d\k\i\f\f\1\b\4\2\8\r\s\x\6\f\z\8\n\s\h\0\t\j\q\l\v\4\a\g\a\o\h\b\0\j\k\l\z\q\2\w\h\v\b\f\d\9\c\1\y\7\y\w\3\g\j\s\s\e\h\8\t\n\7\k\q\n\x\q\k\c\l\7\8\s\t\3\t\1\5\z\q\n\e\g\a\9\j\n\2\f\5\s\1\j\i\z\7\1\h\l\p\u ]] 00:11:22.641 00:11:22.641 real 0m5.152s 00:11:22.641 user 0m3.005s 00:11:22.641 sys 0m1.175s 00:11:22.641 ************************************ 00:11:22.641 END TEST dd_flags_misc_forced_aio 00:11:22.641 ************************************ 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:22.641 00:11:22.641 real 0m23.645s 00:11:22.641 user 0m12.526s 00:11:22.641 sys 0m7.179s 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.641 11:22:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:11:22.641 ************************************ 00:11:22.641 END TEST spdk_dd_posix 00:11:22.641 ************************************ 00:11:22.641 11:22:18 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:22.641 11:22:18 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:22.641 11:22:18 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.641 11:22:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:22.641 ************************************ 00:11:22.641 START TEST spdk_dd_malloc 00:11:22.641 ************************************ 00:11:22.641 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:22.641 * Looking for test storage... 00:11:22.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:22.641 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:22.641 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:22.641 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:22.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.899 --rc genhtml_branch_coverage=1 00:11:22.899 --rc genhtml_function_coverage=1 00:11:22.899 --rc genhtml_legend=1 00:11:22.899 --rc geninfo_all_blocks=1 00:11:22.899 --rc geninfo_unexecuted_blocks=1 00:11:22.899 00:11:22.899 ' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:22.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.899 --rc genhtml_branch_coverage=1 00:11:22.899 --rc genhtml_function_coverage=1 00:11:22.899 --rc genhtml_legend=1 00:11:22.899 --rc geninfo_all_blocks=1 00:11:22.899 --rc geninfo_unexecuted_blocks=1 00:11:22.899 00:11:22.899 ' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:22.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.899 --rc genhtml_branch_coverage=1 00:11:22.899 --rc genhtml_function_coverage=1 00:11:22.899 --rc genhtml_legend=1 00:11:22.899 --rc geninfo_all_blocks=1 00:11:22.899 --rc geninfo_unexecuted_blocks=1 00:11:22.899 00:11:22.899 ' 00:11:22.899 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:22.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.899 --rc genhtml_branch_coverage=1 00:11:22.899 --rc genhtml_function_coverage=1 00:11:22.899 --rc genhtml_legend=1 00:11:22.899 --rc geninfo_all_blocks=1 00:11:22.899 --rc geninfo_unexecuted_blocks=1 00:11:22.900 00:11:22.900 ' 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.900 11:22:18 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:22.900 ************************************ 00:11:22.900 START TEST dd_malloc_copy 00:11:22.900 ************************************ 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:22.900 11:22:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:22.900 [2024-10-07 11:22:18.279686] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:22.900 [2024-10-07 11:22:18.279808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60825 ] 00:11:22.900 { 00:11:22.900 "subsystems": [ 00:11:22.900 { 00:11:22.900 "subsystem": "bdev", 00:11:22.900 "config": [ 00:11:22.900 { 00:11:22.900 "params": { 00:11:22.900 "block_size": 512, 00:11:22.900 "num_blocks": 1048576, 00:11:22.900 "name": "malloc0" 00:11:22.900 }, 00:11:22.900 "method": "bdev_malloc_create" 00:11:22.900 }, 00:11:22.900 { 00:11:22.900 "params": { 00:11:22.900 "block_size": 512, 00:11:22.900 "num_blocks": 1048576, 00:11:22.900 "name": "malloc1" 00:11:22.900 }, 00:11:22.900 "method": "bdev_malloc_create" 00:11:22.900 }, 00:11:22.900 { 00:11:22.900 "method": "bdev_wait_for_examine" 00:11:22.900 } 00:11:22.900 ] 00:11:22.900 } 00:11:22.900 ] 00:11:22.900 } 00:11:22.900 [2024-10-07 11:22:18.417434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.161 [2024-10-07 11:22:18.544242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.161 [2024-10-07 11:22:18.599937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.536  [2024-10-07T11:22:20.994Z] Copying: 198/512 [MB] (198 MBps) [2024-10-07T11:22:21.562Z] Copying: 395/512 [MB] (196 MBps) [2024-10-07T11:22:22.213Z] Copying: 512/512 [MB] (average 197 MBps) 00:11:26.690 00:11:26.690 11:22:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:26.690 11:22:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:26.690 11:22:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:26.690 11:22:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:26.690 [2024-10-07 11:22:22.207671] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
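The dd_malloc_copy run above is purely RAM-to-RAM: the JSON handed to spdk_dd on fd 62 creates two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each), the data is copied malloc0 -> malloc1 and then back the other way, and each direction lands around 197-200 MBps here. A standalone sketch of the same invocation, using only the options shown in the trace (the spdk_dd binary path is a placeholder):

  conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
    { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }'
  # forward copy, then reverse; --json accepts any readable path, including /dev/fd/N
  "$SPDK_BIN/spdk_dd" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")
  "$SPDK_BIN/spdk_dd" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$conf")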
00:11:26.691 [2024-10-07 11:22:22.207780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:11:26.691 { 00:11:26.691 "subsystems": [ 00:11:26.691 { 00:11:26.691 "subsystem": "bdev", 00:11:26.691 "config": [ 00:11:26.691 { 00:11:26.691 "params": { 00:11:26.691 "block_size": 512, 00:11:26.691 "num_blocks": 1048576, 00:11:26.691 "name": "malloc0" 00:11:26.691 }, 00:11:26.691 "method": "bdev_malloc_create" 00:11:26.691 }, 00:11:26.691 { 00:11:26.691 "params": { 00:11:26.691 "block_size": 512, 00:11:26.691 "num_blocks": 1048576, 00:11:26.691 "name": "malloc1" 00:11:26.691 }, 00:11:26.691 "method": "bdev_malloc_create" 00:11:26.691 }, 00:11:26.691 { 00:11:26.691 "method": "bdev_wait_for_examine" 00:11:26.691 } 00:11:26.691 ] 00:11:26.691 } 00:11:26.691 ] 00:11:26.691 } 00:11:26.949 [2024-10-07 11:22:22.347286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.949 [2024-10-07 11:22:22.468157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.207 [2024-10-07 11:22:22.523284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.583  [2024-10-07T11:22:25.040Z] Copying: 200/512 [MB] (200 MBps) [2024-10-07T11:22:25.606Z] Copying: 400/512 [MB] (199 MBps) [2024-10-07T11:22:26.173Z] Copying: 512/512 [MB] (average 200 MBps) 00:11:30.650 00:11:30.650 00:11:30.650 real 0m7.807s 00:11:30.650 user 0m6.789s 00:11:30.650 sys 0m0.860s 00:11:30.650 11:22:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.650 11:22:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 ************************************ 00:11:30.650 END TEST dd_malloc_copy 00:11:30.650 ************************************ 00:11:30.650 00:11:30.650 real 0m8.046s 00:11:30.650 user 0m6.929s 00:11:30.650 sys 0m0.966s 00:11:30.650 11:22:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.650 11:22:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 ************************************ 00:11:30.650 END TEST spdk_dd_malloc 00:11:30.650 ************************************ 00:11:30.650 11:22:26 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:30.650 11:22:26 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.650 11:22:26 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.650 11:22:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 ************************************ 00:11:30.650 START TEST spdk_dd_bdev_to_bdev 00:11:30.650 ************************************ 00:11:30.650 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:30.909 * Looking for test storage... 
00:11:30.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.909 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:30.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.910 --rc genhtml_branch_coverage=1 00:11:30.910 --rc genhtml_function_coverage=1 00:11:30.910 --rc genhtml_legend=1 00:11:30.910 --rc geninfo_all_blocks=1 00:11:30.910 --rc geninfo_unexecuted_blocks=1 00:11:30.910 00:11:30.910 ' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:30.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.910 --rc genhtml_branch_coverage=1 00:11:30.910 --rc genhtml_function_coverage=1 00:11:30.910 --rc genhtml_legend=1 00:11:30.910 --rc geninfo_all_blocks=1 00:11:30.910 --rc geninfo_unexecuted_blocks=1 00:11:30.910 00:11:30.910 ' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:30.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.910 --rc genhtml_branch_coverage=1 00:11:30.910 --rc genhtml_function_coverage=1 00:11:30.910 --rc genhtml_legend=1 00:11:30.910 --rc geninfo_all_blocks=1 00:11:30.910 --rc geninfo_unexecuted_blocks=1 00:11:30.910 00:11:30.910 ' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:30.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.910 --rc genhtml_branch_coverage=1 00:11:30.910 --rc genhtml_function_coverage=1 00:11:30.910 --rc genhtml_legend=1 00:11:30.910 --rc geninfo_all_blocks=1 00:11:30.910 --rc geninfo_unexecuted_blocks=1 00:11:30.910 00:11:30.910 ' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.910 11:22:26 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:30.910 ************************************ 00:11:30.910 START TEST dd_inflate_file 00:11:30.910 ************************************ 00:11:30.910 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:30.910 [2024-10-07 11:22:26.367625] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:30.910 [2024-10-07 11:22:26.367734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:11:31.170 [2024-10-07 11:22:26.507912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.170 [2024-10-07 11:22:26.625041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.170 [2024-10-07 11:22:26.680635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.429  [2024-10-07T11:22:27.211Z] Copying: 64/64 [MB] (average 1560 MBps) 00:11:31.688 00:11:31.688 00:11:31.688 real 0m0.663s 00:11:31.688 user 0m0.403s 00:11:31.688 sys 0m0.315s 00:11:31.688 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.688 ************************************ 00:11:31.688 11:22:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:31.688 END TEST dd_inflate_file 00:11:31.688 ************************************ 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:31.688 ************************************ 00:11:31.688 START TEST dd_copy_to_out_bdev 00:11:31.688 ************************************ 00:11:31.688 11:22:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:31.688 { 00:11:31.688 "subsystems": [ 00:11:31.688 { 00:11:31.688 "subsystem": "bdev", 00:11:31.688 "config": [ 00:11:31.688 { 00:11:31.688 "params": { 00:11:31.688 "trtype": "pcie", 00:11:31.688 "traddr": "0000:00:10.0", 00:11:31.688 "name": "Nvme0" 00:11:31.688 }, 00:11:31.688 "method": "bdev_nvme_attach_controller" 00:11:31.688 }, 00:11:31.688 { 00:11:31.688 "params": { 00:11:31.688 "trtype": "pcie", 00:11:31.688 "traddr": "0000:00:11.0", 00:11:31.688 "name": "Nvme1" 00:11:31.688 }, 00:11:31.688 "method": "bdev_nvme_attach_controller" 00:11:31.688 }, 00:11:31.688 { 00:11:31.688 "method": "bdev_wait_for_examine" 00:11:31.688 } 00:11:31.688 ] 00:11:31.688 } 00:11:31.688 ] 00:11:31.688 } 00:11:31.688 [2024-10-07 11:22:27.089793] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
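A note on the sizes the following tests key off: dd_inflate_file appended 64 MiB of zeroes (--if=/dev/zero --oflag=append --bs=1048576 --count=64) to dd.dump0, which already held the 27-byte magic line ("This Is Our Magic, find it" plus a newline), so the wc -c check comes out to 67108891 bytes (64 * 1048576 + 27). That is also why the bdev copies that follow use count=65: at a 1 MiB block size the file spans 64 full blocks plus one partial block carrying the trailing bytes. A sketch of the inflate step and the size check, with SPDK_BIN/TEST_DIR again as placeholders:

  "$SPDK_BIN/spdk_dd" --if=/dev/zero --of="$TEST_DIR/dd.dump0" \
    --oflag=append --bs=1048576 --count=64
  actual=$(wc -c < "$TEST_DIR/dd.dump0")
  (( actual == 64 * 1048576 + 27 )) || echo "unexpected size: $actual bytes"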
00:11:31.688 [2024-10-07 11:22:27.089889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:11:31.947 [2024-10-07 11:22:27.231041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.947 [2024-10-07 11:22:27.347728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.947 [2024-10-07 11:22:27.406911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.351  [2024-10-07T11:22:28.874Z] Copying: 63/64 [MB] (63 MBps) [2024-10-07T11:22:29.132Z] Copying: 64/64 [MB] (average 63 MBps) 00:11:33.609 00:11:33.609 00:11:33.609 real 0m1.846s 00:11:33.609 user 0m1.606s 00:11:33.609 sys 0m1.355s 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 ************************************ 00:11:33.609 END TEST dd_copy_to_out_bdev 00:11:33.609 ************************************ 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 ************************************ 00:11:33.609 START TEST dd_offset_magic 00:11:33.609 ************************************ 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:33.609 11:22:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 [2024-10-07 11:22:28.976389] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:33.609 [2024-10-07 11:22:28.976472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61077 ] 00:11:33.609 { 00:11:33.609 "subsystems": [ 00:11:33.609 { 00:11:33.609 "subsystem": "bdev", 00:11:33.609 "config": [ 00:11:33.609 { 00:11:33.609 "params": { 00:11:33.609 "trtype": "pcie", 00:11:33.609 "traddr": "0000:00:10.0", 00:11:33.609 "name": "Nvme0" 00:11:33.609 }, 00:11:33.609 "method": "bdev_nvme_attach_controller" 00:11:33.609 }, 00:11:33.609 { 00:11:33.609 "params": { 00:11:33.609 "trtype": "pcie", 00:11:33.609 "traddr": "0000:00:11.0", 00:11:33.609 "name": "Nvme1" 00:11:33.609 }, 00:11:33.609 "method": "bdev_nvme_attach_controller" 00:11:33.609 }, 00:11:33.609 { 00:11:33.609 "method": "bdev_wait_for_examine" 00:11:33.609 } 00:11:33.609 ] 00:11:33.609 } 00:11:33.609 ] 00:11:33.609 } 00:11:33.609 [2024-10-07 11:22:29.109296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.867 [2024-10-07 11:22:29.240414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.867 [2024-10-07 11:22:29.295900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.124  [2024-10-07T11:22:29.905Z] Copying: 65/65 [MB] (average 1031 MBps) 00:11:34.382 00:11:34.382 11:22:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:34.382 11:22:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:34.382 11:22:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:34.382 11:22:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:34.382 [2024-10-07 11:22:29.862965] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:34.382 { 00:11:34.382 "subsystems": [ 00:11:34.382 { 00:11:34.382 "subsystem": "bdev", 00:11:34.382 "config": [ 00:11:34.382 { 00:11:34.382 "params": { 00:11:34.382 "trtype": "pcie", 00:11:34.382 "traddr": "0000:00:10.0", 00:11:34.382 "name": "Nvme0" 00:11:34.382 }, 00:11:34.382 "method": "bdev_nvme_attach_controller" 00:11:34.382 }, 00:11:34.382 { 00:11:34.382 "params": { 00:11:34.382 "trtype": "pcie", 00:11:34.382 "traddr": "0000:00:11.0", 00:11:34.382 "name": "Nvme1" 00:11:34.382 }, 00:11:34.382 "method": "bdev_nvme_attach_controller" 00:11:34.382 }, 00:11:34.382 { 00:11:34.382 "method": "bdev_wait_for_examine" 00:11:34.382 } 00:11:34.382 ] 00:11:34.382 } 00:11:34.382 ] 00:11:34.382 } 00:11:34.382 [2024-10-07 11:22:29.863107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:11:34.640 [2024-10-07 11:22:30.007478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.640 [2024-10-07 11:22:30.119231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.898 [2024-10-07 11:22:30.173361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.898  [2024-10-07T11:22:30.679Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:35.156 00:11:35.156 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:35.156 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:35.156 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:35.156 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:35.156 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:35.157 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:35.157 11:22:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:35.157 [2024-10-07 11:22:30.636016] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
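The dd_offset_magic loop above repeats the same verification for each offset (16, then 64): copy all 65 blocks of the test data from Nvme0n1 into Nvme1n1 at --seek=<offset>, copy a single block back from Nvme1n1 at --skip=<offset> into dd.dump1, then read the first 26 bytes and confirm the magic string survived the round trip. A sketch of that flow under the same assumptions as the earlier snippets, with $conf standing in for the generated Nvme0/Nvme1 JSON config:

  for offset in 16 64; do
    "$SPDK_BIN/spdk_dd" --ib=Nvme0n1 --ob=Nvme1n1 \
      --count=65 --seek="$offset" --bs=1048576 --json <(printf '%s' "$conf")
    "$SPDK_BIN/spdk_dd" --ib=Nvme1n1 --of="$TEST_DIR/dd.dump1" \
      --count=1 --skip="$offset" --bs=1048576 --json <(printf '%s' "$conf")
    read -rn26 magic_check < "$TEST_DIR/dd.dump1"
    [[ $magic_check == 'This Is Our Magic, find it' ]] || echo "magic missing at block offset $offset"
  done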
00:11:35.157 [2024-10-07 11:22:30.636144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61113 ] 00:11:35.157 { 00:11:35.157 "subsystems": [ 00:11:35.157 { 00:11:35.157 "subsystem": "bdev", 00:11:35.157 "config": [ 00:11:35.157 { 00:11:35.157 "params": { 00:11:35.157 "trtype": "pcie", 00:11:35.157 "traddr": "0000:00:10.0", 00:11:35.157 "name": "Nvme0" 00:11:35.157 }, 00:11:35.157 "method": "bdev_nvme_attach_controller" 00:11:35.157 }, 00:11:35.157 { 00:11:35.157 "params": { 00:11:35.157 "trtype": "pcie", 00:11:35.157 "traddr": "0000:00:11.0", 00:11:35.157 "name": "Nvme1" 00:11:35.157 }, 00:11:35.157 "method": "bdev_nvme_attach_controller" 00:11:35.157 }, 00:11:35.157 { 00:11:35.157 "method": "bdev_wait_for_examine" 00:11:35.157 } 00:11:35.157 ] 00:11:35.157 } 00:11:35.157 ] 00:11:35.157 } 00:11:35.415 [2024-10-07 11:22:30.775318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.415 [2024-10-07 11:22:30.883074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.415 [2024-10-07 11:22:30.937975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.737  [2024-10-07T11:22:31.519Z] Copying: 65/65 [MB] (average 1120 MBps) 00:11:35.996 00:11:35.996 11:22:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:35.996 11:22:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:35.996 11:22:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:35.996 11:22:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:36.254 { 00:11:36.254 "subsystems": [ 00:11:36.254 { 00:11:36.254 "subsystem": "bdev", 00:11:36.254 "config": [ 00:11:36.254 { 00:11:36.254 "params": { 00:11:36.254 "trtype": "pcie", 00:11:36.254 "traddr": "0000:00:10.0", 00:11:36.255 "name": "Nvme0" 00:11:36.255 }, 00:11:36.255 "method": "bdev_nvme_attach_controller" 00:11:36.255 }, 00:11:36.255 { 00:11:36.255 "params": { 00:11:36.255 "trtype": "pcie", 00:11:36.255 "traddr": "0000:00:11.0", 00:11:36.255 "name": "Nvme1" 00:11:36.255 }, 00:11:36.255 "method": "bdev_nvme_attach_controller" 00:11:36.255 }, 00:11:36.255 { 00:11:36.255 "method": "bdev_wait_for_examine" 00:11:36.255 } 00:11:36.255 ] 00:11:36.255 } 00:11:36.255 ] 00:11:36.255 } 00:11:36.255 [2024-10-07 11:22:31.523952] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:36.255 [2024-10-07 11:22:31.524056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:11:36.255 [2024-10-07 11:22:31.661148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.255 [2024-10-07 11:22:31.772819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.513 [2024-10-07 11:22:31.828621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.513  [2024-10-07T11:22:32.295Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:36.772 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:36.772 00:11:36.772 real 0m3.314s 00:11:36.772 user 0m2.448s 00:11:36.772 sys 0m0.945s 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.772 ************************************ 00:11:36.772 END TEST dd_offset_magic 00:11:36.772 ************************************ 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:36.772 11:22:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 [2024-10-07 11:22:32.341189] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:37.031 [2024-10-07 11:22:32.341344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61170 ] 00:11:37.031 { 00:11:37.032 "subsystems": [ 00:11:37.032 { 00:11:37.032 "subsystem": "bdev", 00:11:37.032 "config": [ 00:11:37.032 { 00:11:37.032 "params": { 00:11:37.032 "trtype": "pcie", 00:11:37.032 "traddr": "0000:00:10.0", 00:11:37.032 "name": "Nvme0" 00:11:37.032 }, 00:11:37.032 "method": "bdev_nvme_attach_controller" 00:11:37.032 }, 00:11:37.032 { 00:11:37.032 "params": { 00:11:37.032 "trtype": "pcie", 00:11:37.032 "traddr": "0000:00:11.0", 00:11:37.032 "name": "Nvme1" 00:11:37.032 }, 00:11:37.032 "method": "bdev_nvme_attach_controller" 00:11:37.032 }, 00:11:37.032 { 00:11:37.032 "method": "bdev_wait_for_examine" 00:11:37.032 } 00:11:37.032 ] 00:11:37.032 } 00:11:37.032 ] 00:11:37.032 } 00:11:37.032 [2024-10-07 11:22:32.480619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.290 [2024-10-07 11:22:32.608522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.290 [2024-10-07 11:22:32.665718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.549  [2024-10-07T11:22:33.331Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:37.808 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:37.808 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:37.808 { 00:11:37.808 "subsystems": [ 00:11:37.808 { 00:11:37.808 "subsystem": "bdev", 00:11:37.808 "config": [ 00:11:37.808 { 00:11:37.808 "params": { 00:11:37.808 "trtype": "pcie", 00:11:37.808 "traddr": "0000:00:10.0", 00:11:37.808 "name": "Nvme0" 00:11:37.808 }, 00:11:37.808 "method": "bdev_nvme_attach_controller" 00:11:37.808 }, 00:11:37.808 { 00:11:37.808 "params": { 00:11:37.808 "trtype": "pcie", 00:11:37.808 "traddr": "0000:00:11.0", 00:11:37.808 "name": "Nvme1" 00:11:37.808 }, 00:11:37.808 "method": "bdev_nvme_attach_controller" 00:11:37.808 }, 00:11:37.808 { 00:11:37.808 "method": "bdev_wait_for_examine" 00:11:37.808 } 00:11:37.808 ] 00:11:37.808 } 00:11:37.808 ] 00:11:37.808 } 00:11:37.808 [2024-10-07 11:22:33.138312] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:37.808 [2024-10-07 11:22:33.138465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61186 ] 00:11:37.808 [2024-10-07 11:22:33.272356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.066 [2024-10-07 11:22:33.392453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.066 [2024-10-07 11:22:33.446883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.326  [2024-10-07T11:22:34.107Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:11:38.584 00:11:38.584 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:38.584 ************************************ 00:11:38.584 END TEST spdk_dd_bdev_to_bdev 00:11:38.584 ************************************ 00:11:38.584 00:11:38.584 real 0m7.766s 00:11:38.584 user 0m5.754s 00:11:38.584 sys 0m3.368s 00:11:38.584 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.584 11:22:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:38.584 11:22:33 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:38.584 11:22:33 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:38.584 11:22:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.584 11:22:33 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.584 11:22:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:38.584 ************************************ 00:11:38.584 START TEST spdk_dd_uring 00:11:38.584 ************************************ 00:11:38.584 11:22:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:38.584 * Looking for test storage... 
00:11:38.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.584 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.842 --rc genhtml_branch_coverage=1 00:11:38.842 --rc genhtml_function_coverage=1 00:11:38.842 --rc genhtml_legend=1 00:11:38.842 --rc geninfo_all_blocks=1 00:11:38.842 --rc geninfo_unexecuted_blocks=1 00:11:38.842 00:11:38.842 ' 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.842 --rc genhtml_branch_coverage=1 00:11:38.842 --rc genhtml_function_coverage=1 00:11:38.842 --rc genhtml_legend=1 00:11:38.842 --rc geninfo_all_blocks=1 00:11:38.842 --rc geninfo_unexecuted_blocks=1 00:11:38.842 00:11:38.842 ' 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.842 --rc genhtml_branch_coverage=1 00:11:38.842 --rc genhtml_function_coverage=1 00:11:38.842 --rc genhtml_legend=1 00:11:38.842 --rc geninfo_all_blocks=1 00:11:38.842 --rc geninfo_unexecuted_blocks=1 00:11:38.842 00:11:38.842 ' 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.842 --rc genhtml_branch_coverage=1 00:11:38.842 --rc genhtml_function_coverage=1 00:11:38.842 --rc genhtml_legend=1 00:11:38.842 --rc geninfo_all_blocks=1 00:11:38.842 --rc geninfo_unexecuted_blocks=1 00:11:38.842 00:11:38.842 ' 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.842 11:22:34 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:38.843 ************************************ 00:11:38.843 START TEST dd_uring_copy 00:11:38.843 ************************************ 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:38.843 
11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=emtqd0nwq055veypi0m9vt0v0mw5udn2abe57q4lmckvm06wfp3tkcojgjgn53mxq27jthraocgvaz7u5e2kc58ecdi3n1eeveaygflwo8byfexnqakgcjl76fioorq4567sivkr629ti2n013avpa5zphn7ckkw7m3k9actroembxv1rv30kfgy34k6voxufrtq5fx0gj62my2xu4web2d2p2ieesyycibkzgwax7nk3533x017xakbh3znedui24mlpje6k2yl0ovpi5s8afjlygf2q7zv0cq36qex1kk6nhcqftwpe4d2wd7quby08gadedwpzd823xim424oc16jdeh502ab8rn819g6bo6bnx5cp2kcvvc0pf4h055kurwa9mxujq11wsob6f9xt3kbxefyryplgc69zxs216f5fc9al2ztdf4e9b9w3u1tkzo5j56oq8jnnaaqizxsdb2lye104oj5rxf296fd4fryw9pvbgs14u8qwkuq1o19etmgbtqcn03gkw7yer6wq1z4jswh9btbs41i3aexc9pc72p9bxga3mzgirkawaz5ytwyc5ot411iwlqsj5fcrm4k95v7cfvooc9v3y4y4d0oojh5266jo363bnsy4lbqlru4lxftob50dcatyggy40z1k4artd7kvzvwzfzg8d8gcw0pj2a496g6scptkv1tgvamqpv6b94f2jx2lp6m06e6u69vw9hqn4uncilxylzf9xegvrqelmcw3bmylfkbcopl6vo3qmsmamyj58j4sndbahrelii80t6249787qsdaj6ottqkoi0qsg3cr2x6j8z0iji8ogi5fezt97rnqlboqvpfba9fnmie69y466jhyyrh0jjtwizm6ni07vd9tf7gicfqzkzy21xewqyqdjqdjlsi34ta8wtpwrug6grx298mcv2irimnfvph5p8sljjwfq23law7t4u4bx9bywjysfb677kwciny6sx05amogv653gwn6uz6q2c9pp0x 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
emtqd0nwq055veypi0m9vt0v0mw5udn2abe57q4lmckvm06wfp3tkcojgjgn53mxq27jthraocgvaz7u5e2kc58ecdi3n1eeveaygflwo8byfexnqakgcjl76fioorq4567sivkr629ti2n013avpa5zphn7ckkw7m3k9actroembxv1rv30kfgy34k6voxufrtq5fx0gj62my2xu4web2d2p2ieesyycibkzgwax7nk3533x017xakbh3znedui24mlpje6k2yl0ovpi5s8afjlygf2q7zv0cq36qex1kk6nhcqftwpe4d2wd7quby08gadedwpzd823xim424oc16jdeh502ab8rn819g6bo6bnx5cp2kcvvc0pf4h055kurwa9mxujq11wsob6f9xt3kbxefyryplgc69zxs216f5fc9al2ztdf4e9b9w3u1tkzo5j56oq8jnnaaqizxsdb2lye104oj5rxf296fd4fryw9pvbgs14u8qwkuq1o19etmgbtqcn03gkw7yer6wq1z4jswh9btbs41i3aexc9pc72p9bxga3mzgirkawaz5ytwyc5ot411iwlqsj5fcrm4k95v7cfvooc9v3y4y4d0oojh5266jo363bnsy4lbqlru4lxftob50dcatyggy40z1k4artd7kvzvwzfzg8d8gcw0pj2a496g6scptkv1tgvamqpv6b94f2jx2lp6m06e6u69vw9hqn4uncilxylzf9xegvrqelmcw3bmylfkbcopl6vo3qmsmamyj58j4sndbahrelii80t6249787qsdaj6ottqkoi0qsg3cr2x6j8z0iji8ogi5fezt97rnqlboqvpfba9fnmie69y466jhyyrh0jjtwizm6ni07vd9tf7gicfqzkzy21xewqyqdjqdjlsi34ta8wtpwrug6grx298mcv2irimnfvph5p8sljjwfq23law7t4u4bx9bywjysfb677kwciny6sx05amogv653gwn6uz6q2c9pp0x 00:11:38.843 11:22:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:38.843 [2024-10-07 11:22:34.207791] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:38.843 [2024-10-07 11:22:34.207892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:11:38.843 [2024-10-07 11:22:34.343526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.109 [2024-10-07 11:22:34.456040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.109 [2024-10-07 11:22:34.512008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:39.676  [2024-10-07T11:22:35.765Z] Copying: 511/511 [MB] (average 1150 MBps) 00:11:40.242 00:11:40.242 11:22:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:40.242 11:22:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:40.242 11:22:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:40.242 11:22:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:40.242 [2024-10-07 11:22:35.655867] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:40.242 [2024-10-07 11:22:35.655979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:11:40.242 { 00:11:40.242 "subsystems": [ 00:11:40.242 { 00:11:40.242 "subsystem": "bdev", 00:11:40.242 "config": [ 00:11:40.242 { 00:11:40.242 "params": { 00:11:40.242 "block_size": 512, 00:11:40.242 "num_blocks": 1048576, 00:11:40.242 "name": "malloc0" 00:11:40.242 }, 00:11:40.242 "method": "bdev_malloc_create" 00:11:40.242 }, 00:11:40.242 { 00:11:40.242 "params": { 00:11:40.242 "filename": "/dev/zram1", 00:11:40.242 "name": "uring0" 00:11:40.242 }, 00:11:40.242 "method": "bdev_uring_create" 00:11:40.242 }, 00:11:40.242 { 00:11:40.242 "method": "bdev_wait_for_examine" 00:11:40.242 } 00:11:40.242 ] 00:11:40.242 } 00:11:40.242 ] 00:11:40.242 } 00:11:40.500 [2024-10-07 11:22:35.795551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.500 [2024-10-07 11:22:35.905327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.500 [2024-10-07 11:22:35.962115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.873  [2024-10-07T11:22:38.332Z] Copying: 207/512 [MB] (207 MBps) [2024-10-07T11:22:38.896Z] Copying: 415/512 [MB] (208 MBps) [2024-10-07T11:22:39.154Z] Copying: 512/512 [MB] (average 208 MBps) 00:11:43.631 00:11:43.632 11:22:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:43.632 11:22:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:43.632 11:22:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:43.632 11:22:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:43.632 [2024-10-07 11:22:39.107189] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:43.632 [2024-10-07 11:22:39.107375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61335 ] 00:11:43.632 { 00:11:43.632 "subsystems": [ 00:11:43.632 { 00:11:43.632 "subsystem": "bdev", 00:11:43.632 "config": [ 00:11:43.632 { 00:11:43.632 "params": { 00:11:43.632 "block_size": 512, 00:11:43.632 "num_blocks": 1048576, 00:11:43.632 "name": "malloc0" 00:11:43.632 }, 00:11:43.632 "method": "bdev_malloc_create" 00:11:43.632 }, 00:11:43.632 { 00:11:43.632 "params": { 00:11:43.632 "filename": "/dev/zram1", 00:11:43.632 "name": "uring0" 00:11:43.632 }, 00:11:43.632 "method": "bdev_uring_create" 00:11:43.632 }, 00:11:43.632 { 00:11:43.632 "method": "bdev_wait_for_examine" 00:11:43.632 } 00:11:43.632 ] 00:11:43.632 } 00:11:43.632 ] 00:11:43.632 } 00:11:43.890 [2024-10-07 11:22:39.247422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.890 [2024-10-07 11:22:39.355813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.890 [2024-10-07 11:22:39.412216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.266  [2024-10-07T11:22:41.742Z] Copying: 164/512 [MB] (164 MBps) [2024-10-07T11:22:42.678Z] Copying: 329/512 [MB] (165 MBps) [2024-10-07T11:22:42.937Z] Copying: 477/512 [MB] (148 MBps) [2024-10-07T11:22:43.502Z] Copying: 512/512 [MB] (average 159 MBps) 00:11:47.979 00:11:47.980 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:47.980 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ emtqd0nwq055veypi0m9vt0v0mw5udn2abe57q4lmckvm06wfp3tkcojgjgn53mxq27jthraocgvaz7u5e2kc58ecdi3n1eeveaygflwo8byfexnqakgcjl76fioorq4567sivkr629ti2n013avpa5zphn7ckkw7m3k9actroembxv1rv30kfgy34k6voxufrtq5fx0gj62my2xu4web2d2p2ieesyycibkzgwax7nk3533x017xakbh3znedui24mlpje6k2yl0ovpi5s8afjlygf2q7zv0cq36qex1kk6nhcqftwpe4d2wd7quby08gadedwpzd823xim424oc16jdeh502ab8rn819g6bo6bnx5cp2kcvvc0pf4h055kurwa9mxujq11wsob6f9xt3kbxefyryplgc69zxs216f5fc9al2ztdf4e9b9w3u1tkzo5j56oq8jnnaaqizxsdb2lye104oj5rxf296fd4fryw9pvbgs14u8qwkuq1o19etmgbtqcn03gkw7yer6wq1z4jswh9btbs41i3aexc9pc72p9bxga3mzgirkawaz5ytwyc5ot411iwlqsj5fcrm4k95v7cfvooc9v3y4y4d0oojh5266jo363bnsy4lbqlru4lxftob50dcatyggy40z1k4artd7kvzvwzfzg8d8gcw0pj2a496g6scptkv1tgvamqpv6b94f2jx2lp6m06e6u69vw9hqn4uncilxylzf9xegvrqelmcw3bmylfkbcopl6vo3qmsmamyj58j4sndbahrelii80t6249787qsdaj6ottqkoi0qsg3cr2x6j8z0iji8ogi5fezt97rnqlboqvpfba9fnmie69y466jhyyrh0jjtwizm6ni07vd9tf7gicfqzkzy21xewqyqdjqdjlsi34ta8wtpwrug6grx298mcv2irimnfvph5p8sljjwfq23law7t4u4bx9bywjysfb677kwciny6sx05amogv653gwn6uz6q2c9pp0x == 
\e\m\t\q\d\0\n\w\q\0\5\5\v\e\y\p\i\0\m\9\v\t\0\v\0\m\w\5\u\d\n\2\a\b\e\5\7\q\4\l\m\c\k\v\m\0\6\w\f\p\3\t\k\c\o\j\g\j\g\n\5\3\m\x\q\2\7\j\t\h\r\a\o\c\g\v\a\z\7\u\5\e\2\k\c\5\8\e\c\d\i\3\n\1\e\e\v\e\a\y\g\f\l\w\o\8\b\y\f\e\x\n\q\a\k\g\c\j\l\7\6\f\i\o\o\r\q\4\5\6\7\s\i\v\k\r\6\2\9\t\i\2\n\0\1\3\a\v\p\a\5\z\p\h\n\7\c\k\k\w\7\m\3\k\9\a\c\t\r\o\e\m\b\x\v\1\r\v\3\0\k\f\g\y\3\4\k\6\v\o\x\u\f\r\t\q\5\f\x\0\g\j\6\2\m\y\2\x\u\4\w\e\b\2\d\2\p\2\i\e\e\s\y\y\c\i\b\k\z\g\w\a\x\7\n\k\3\5\3\3\x\0\1\7\x\a\k\b\h\3\z\n\e\d\u\i\2\4\m\l\p\j\e\6\k\2\y\l\0\o\v\p\i\5\s\8\a\f\j\l\y\g\f\2\q\7\z\v\0\c\q\3\6\q\e\x\1\k\k\6\n\h\c\q\f\t\w\p\e\4\d\2\w\d\7\q\u\b\y\0\8\g\a\d\e\d\w\p\z\d\8\2\3\x\i\m\4\2\4\o\c\1\6\j\d\e\h\5\0\2\a\b\8\r\n\8\1\9\g\6\b\o\6\b\n\x\5\c\p\2\k\c\v\v\c\0\p\f\4\h\0\5\5\k\u\r\w\a\9\m\x\u\j\q\1\1\w\s\o\b\6\f\9\x\t\3\k\b\x\e\f\y\r\y\p\l\g\c\6\9\z\x\s\2\1\6\f\5\f\c\9\a\l\2\z\t\d\f\4\e\9\b\9\w\3\u\1\t\k\z\o\5\j\5\6\o\q\8\j\n\n\a\a\q\i\z\x\s\d\b\2\l\y\e\1\0\4\o\j\5\r\x\f\2\9\6\f\d\4\f\r\y\w\9\p\v\b\g\s\1\4\u\8\q\w\k\u\q\1\o\1\9\e\t\m\g\b\t\q\c\n\0\3\g\k\w\7\y\e\r\6\w\q\1\z\4\j\s\w\h\9\b\t\b\s\4\1\i\3\a\e\x\c\9\p\c\7\2\p\9\b\x\g\a\3\m\z\g\i\r\k\a\w\a\z\5\y\t\w\y\c\5\o\t\4\1\1\i\w\l\q\s\j\5\f\c\r\m\4\k\9\5\v\7\c\f\v\o\o\c\9\v\3\y\4\y\4\d\0\o\o\j\h\5\2\6\6\j\o\3\6\3\b\n\s\y\4\l\b\q\l\r\u\4\l\x\f\t\o\b\5\0\d\c\a\t\y\g\g\y\4\0\z\1\k\4\a\r\t\d\7\k\v\z\v\w\z\f\z\g\8\d\8\g\c\w\0\p\j\2\a\4\9\6\g\6\s\c\p\t\k\v\1\t\g\v\a\m\q\p\v\6\b\9\4\f\2\j\x\2\l\p\6\m\0\6\e\6\u\6\9\v\w\9\h\q\n\4\u\n\c\i\l\x\y\l\z\f\9\x\e\g\v\r\q\e\l\m\c\w\3\b\m\y\l\f\k\b\c\o\p\l\6\v\o\3\q\m\s\m\a\m\y\j\5\8\j\4\s\n\d\b\a\h\r\e\l\i\i\8\0\t\6\2\4\9\7\8\7\q\s\d\a\j\6\o\t\t\q\k\o\i\0\q\s\g\3\c\r\2\x\6\j\8\z\0\i\j\i\8\o\g\i\5\f\e\z\t\9\7\r\n\q\l\b\o\q\v\p\f\b\a\9\f\n\m\i\e\6\9\y\4\6\6\j\h\y\y\r\h\0\j\j\t\w\i\z\m\6\n\i\0\7\v\d\9\t\f\7\g\i\c\f\q\z\k\z\y\2\1\x\e\w\q\y\q\d\j\q\d\j\l\s\i\3\4\t\a\8\w\t\p\w\r\u\g\6\g\r\x\2\9\8\m\c\v\2\i\r\i\m\n\f\v\p\h\5\p\8\s\l\j\j\w\f\q\2\3\l\a\w\7\t\4\u\4\b\x\9\b\y\w\j\y\s\f\b\6\7\7\k\w\c\i\n\y\6\s\x\0\5\a\m\o\g\v\6\5\3\g\w\n\6\u\z\6\q\2\c\9\p\p\0\x ]] 00:11:47.980 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:47.980 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ emtqd0nwq055veypi0m9vt0v0mw5udn2abe57q4lmckvm06wfp3tkcojgjgn53mxq27jthraocgvaz7u5e2kc58ecdi3n1eeveaygflwo8byfexnqakgcjl76fioorq4567sivkr629ti2n013avpa5zphn7ckkw7m3k9actroembxv1rv30kfgy34k6voxufrtq5fx0gj62my2xu4web2d2p2ieesyycibkzgwax7nk3533x017xakbh3znedui24mlpje6k2yl0ovpi5s8afjlygf2q7zv0cq36qex1kk6nhcqftwpe4d2wd7quby08gadedwpzd823xim424oc16jdeh502ab8rn819g6bo6bnx5cp2kcvvc0pf4h055kurwa9mxujq11wsob6f9xt3kbxefyryplgc69zxs216f5fc9al2ztdf4e9b9w3u1tkzo5j56oq8jnnaaqizxsdb2lye104oj5rxf296fd4fryw9pvbgs14u8qwkuq1o19etmgbtqcn03gkw7yer6wq1z4jswh9btbs41i3aexc9pc72p9bxga3mzgirkawaz5ytwyc5ot411iwlqsj5fcrm4k95v7cfvooc9v3y4y4d0oojh5266jo363bnsy4lbqlru4lxftob50dcatyggy40z1k4artd7kvzvwzfzg8d8gcw0pj2a496g6scptkv1tgvamqpv6b94f2jx2lp6m06e6u69vw9hqn4uncilxylzf9xegvrqelmcw3bmylfkbcopl6vo3qmsmamyj58j4sndbahrelii80t6249787qsdaj6ottqkoi0qsg3cr2x6j8z0iji8ogi5fezt97rnqlboqvpfba9fnmie69y466jhyyrh0jjtwizm6ni07vd9tf7gicfqzkzy21xewqyqdjqdjlsi34ta8wtpwrug6grx298mcv2irimnfvph5p8sljjwfq23law7t4u4bx9bywjysfb677kwciny6sx05amogv653gwn6uz6q2c9pp0x == 
\e\m\t\q\d\0\n\w\q\0\5\5\v\e\y\p\i\0\m\9\v\t\0\v\0\m\w\5\u\d\n\2\a\b\e\5\7\q\4\l\m\c\k\v\m\0\6\w\f\p\3\t\k\c\o\j\g\j\g\n\5\3\m\x\q\2\7\j\t\h\r\a\o\c\g\v\a\z\7\u\5\e\2\k\c\5\8\e\c\d\i\3\n\1\e\e\v\e\a\y\g\f\l\w\o\8\b\y\f\e\x\n\q\a\k\g\c\j\l\7\6\f\i\o\o\r\q\4\5\6\7\s\i\v\k\r\6\2\9\t\i\2\n\0\1\3\a\v\p\a\5\z\p\h\n\7\c\k\k\w\7\m\3\k\9\a\c\t\r\o\e\m\b\x\v\1\r\v\3\0\k\f\g\y\3\4\k\6\v\o\x\u\f\r\t\q\5\f\x\0\g\j\6\2\m\y\2\x\u\4\w\e\b\2\d\2\p\2\i\e\e\s\y\y\c\i\b\k\z\g\w\a\x\7\n\k\3\5\3\3\x\0\1\7\x\a\k\b\h\3\z\n\e\d\u\i\2\4\m\l\p\j\e\6\k\2\y\l\0\o\v\p\i\5\s\8\a\f\j\l\y\g\f\2\q\7\z\v\0\c\q\3\6\q\e\x\1\k\k\6\n\h\c\q\f\t\w\p\e\4\d\2\w\d\7\q\u\b\y\0\8\g\a\d\e\d\w\p\z\d\8\2\3\x\i\m\4\2\4\o\c\1\6\j\d\e\h\5\0\2\a\b\8\r\n\8\1\9\g\6\b\o\6\b\n\x\5\c\p\2\k\c\v\v\c\0\p\f\4\h\0\5\5\k\u\r\w\a\9\m\x\u\j\q\1\1\w\s\o\b\6\f\9\x\t\3\k\b\x\e\f\y\r\y\p\l\g\c\6\9\z\x\s\2\1\6\f\5\f\c\9\a\l\2\z\t\d\f\4\e\9\b\9\w\3\u\1\t\k\z\o\5\j\5\6\o\q\8\j\n\n\a\a\q\i\z\x\s\d\b\2\l\y\e\1\0\4\o\j\5\r\x\f\2\9\6\f\d\4\f\r\y\w\9\p\v\b\g\s\1\4\u\8\q\w\k\u\q\1\o\1\9\e\t\m\g\b\t\q\c\n\0\3\g\k\w\7\y\e\r\6\w\q\1\z\4\j\s\w\h\9\b\t\b\s\4\1\i\3\a\e\x\c\9\p\c\7\2\p\9\b\x\g\a\3\m\z\g\i\r\k\a\w\a\z\5\y\t\w\y\c\5\o\t\4\1\1\i\w\l\q\s\j\5\f\c\r\m\4\k\9\5\v\7\c\f\v\o\o\c\9\v\3\y\4\y\4\d\0\o\o\j\h\5\2\6\6\j\o\3\6\3\b\n\s\y\4\l\b\q\l\r\u\4\l\x\f\t\o\b\5\0\d\c\a\t\y\g\g\y\4\0\z\1\k\4\a\r\t\d\7\k\v\z\v\w\z\f\z\g\8\d\8\g\c\w\0\p\j\2\a\4\9\6\g\6\s\c\p\t\k\v\1\t\g\v\a\m\q\p\v\6\b\9\4\f\2\j\x\2\l\p\6\m\0\6\e\6\u\6\9\v\w\9\h\q\n\4\u\n\c\i\l\x\y\l\z\f\9\x\e\g\v\r\q\e\l\m\c\w\3\b\m\y\l\f\k\b\c\o\p\l\6\v\o\3\q\m\s\m\a\m\y\j\5\8\j\4\s\n\d\b\a\h\r\e\l\i\i\8\0\t\6\2\4\9\7\8\7\q\s\d\a\j\6\o\t\t\q\k\o\i\0\q\s\g\3\c\r\2\x\6\j\8\z\0\i\j\i\8\o\g\i\5\f\e\z\t\9\7\r\n\q\l\b\o\q\v\p\f\b\a\9\f\n\m\i\e\6\9\y\4\6\6\j\h\y\y\r\h\0\j\j\t\w\i\z\m\6\n\i\0\7\v\d\9\t\f\7\g\i\c\f\q\z\k\z\y\2\1\x\e\w\q\y\q\d\j\q\d\j\l\s\i\3\4\t\a\8\w\t\p\w\r\u\g\6\g\r\x\2\9\8\m\c\v\2\i\r\i\m\n\f\v\p\h\5\p\8\s\l\j\j\w\f\q\2\3\l\a\w\7\t\4\u\4\b\x\9\b\y\w\j\y\s\f\b\6\7\7\k\w\c\i\n\y\6\s\x\0\5\a\m\o\g\v\6\5\3\g\w\n\6\u\z\6\q\2\c\9\p\p\0\x ]] 00:11:47.980 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:48.237 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:48.237 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:48.237 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:48.237 11:22:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:48.237 [2024-10-07 11:22:43.670653] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:48.237 [2024-10-07 11:22:43.671080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61403 ] 00:11:48.237 { 00:11:48.237 "subsystems": [ 00:11:48.237 { 00:11:48.237 "subsystem": "bdev", 00:11:48.237 "config": [ 00:11:48.237 { 00:11:48.237 "params": { 00:11:48.237 "block_size": 512, 00:11:48.237 "num_blocks": 1048576, 00:11:48.238 "name": "malloc0" 00:11:48.238 }, 00:11:48.238 "method": "bdev_malloc_create" 00:11:48.238 }, 00:11:48.238 { 00:11:48.238 "params": { 00:11:48.238 "filename": "/dev/zram1", 00:11:48.238 "name": "uring0" 00:11:48.238 }, 00:11:48.238 "method": "bdev_uring_create" 00:11:48.238 }, 00:11:48.238 { 00:11:48.238 "method": "bdev_wait_for_examine" 00:11:48.238 } 00:11:48.238 ] 00:11:48.238 } 00:11:48.238 ] 00:11:48.238 } 00:11:48.495 [2024-10-07 11:22:43.810201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.495 [2024-10-07 11:22:43.925231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.495 [2024-10-07 11:22:43.983585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.877  [2024-10-07T11:22:46.334Z] Copying: 135/512 [MB] (135 MBps) [2024-10-07T11:22:47.267Z] Copying: 269/512 [MB] (134 MBps) [2024-10-07T11:22:48.203Z] Copying: 405/512 [MB] (135 MBps) [2024-10-07T11:22:48.462Z] Copying: 512/512 [MB] (average 134 MBps) 00:11:52.939 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:52.939 11:22:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:53.198 [2024-10-07 11:22:48.463965] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:11:53.198 [2024-10-07 11:22:48.464076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61475 ] 00:11:53.198 { 00:11:53.198 "subsystems": [ 00:11:53.198 { 00:11:53.198 "subsystem": "bdev", 00:11:53.198 "config": [ 00:11:53.198 { 00:11:53.198 "params": { 00:11:53.198 "block_size": 512, 00:11:53.198 "num_blocks": 1048576, 00:11:53.198 "name": "malloc0" 00:11:53.198 }, 00:11:53.198 "method": "bdev_malloc_create" 00:11:53.198 }, 00:11:53.198 { 00:11:53.198 "params": { 00:11:53.198 "filename": "/dev/zram1", 00:11:53.198 "name": "uring0" 00:11:53.198 }, 00:11:53.198 "method": "bdev_uring_create" 00:11:53.198 }, 00:11:53.198 { 00:11:53.198 "params": { 00:11:53.198 "name": "uring0" 00:11:53.198 }, 00:11:53.198 "method": "bdev_uring_delete" 00:11:53.198 }, 00:11:53.198 { 00:11:53.198 "method": "bdev_wait_for_examine" 00:11:53.198 } 00:11:53.198 ] 00:11:53.198 } 00:11:53.198 ] 00:11:53.198 } 00:11:53.198 [2024-10-07 11:22:48.604507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.198 [2024-10-07 11:22:48.720945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.456 [2024-10-07 11:22:48.776424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:53.714  [2024-10-07T11:22:49.496Z] Copying: 0/0 [B] (average 0 Bps) 00:11:53.973 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:53.973 11:22:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:53.973 11:22:49 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:53.973 [2024-10-07 11:22:49.458526] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:53.973 [2024-10-07 11:22:49.458623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:11:53.973 { 00:11:53.973 "subsystems": [ 00:11:53.973 { 00:11:53.973 "subsystem": "bdev", 00:11:53.973 "config": [ 00:11:53.973 { 00:11:53.973 "params": { 00:11:53.973 "block_size": 512, 00:11:53.973 "num_blocks": 1048576, 00:11:53.973 "name": "malloc0" 00:11:53.973 }, 00:11:53.973 "method": "bdev_malloc_create" 00:11:53.973 }, 00:11:53.973 { 00:11:53.973 "params": { 00:11:53.973 "filename": "/dev/zram1", 00:11:53.973 "name": "uring0" 00:11:53.973 }, 00:11:53.973 "method": "bdev_uring_create" 00:11:53.973 }, 00:11:53.973 { 00:11:53.973 "params": { 00:11:53.973 "name": "uring0" 00:11:53.973 }, 00:11:53.973 "method": "bdev_uring_delete" 00:11:53.973 }, 00:11:53.973 { 00:11:53.973 "method": "bdev_wait_for_examine" 00:11:53.973 } 00:11:53.973 ] 00:11:53.973 } 00:11:53.973 ] 00:11:53.973 } 00:11:54.231 [2024-10-07 11:22:49.597393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.231 [2024-10-07 11:22:49.711580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.488 [2024-10-07 11:22:49.766749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.488 [2024-10-07 11:22:49.973142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:54.488 [2024-10-07 11:22:49.973217] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:54.488 [2024-10-07 11:22:49.973230] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:54.488 [2024-10-07 11:22:49.973240] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:55.053 [2024-10-07 11:22:50.294296] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:55.053 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:55.311 00:11:55.311 ************************************ 00:11:55.311 END TEST dd_uring_copy 00:11:55.311 ************************************ 00:11:55.311 real 0m16.571s 00:11:55.311 user 0m11.057s 00:11:55.311 sys 0m13.888s 00:11:55.311 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.311 11:22:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:55.311 00:11:55.311 real 0m16.817s 00:11:55.311 user 0m11.184s 00:11:55.311 sys 0m14.008s 00:11:55.311 11:22:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.311 11:22:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:55.311 ************************************ 00:11:55.311 END TEST spdk_dd_uring 00:11:55.311 ************************************ 00:11:55.311 11:22:50 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:55.311 11:22:50 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:55.311 11:22:50 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.311 11:22:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:55.311 ************************************ 00:11:55.311 START TEST spdk_dd_sparse 00:11:55.311 ************************************ 00:11:55.311 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:55.569 * Looking for test storage... 00:11:55.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:55.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.569 --rc genhtml_branch_coverage=1 00:11:55.569 --rc genhtml_function_coverage=1 00:11:55.569 --rc genhtml_legend=1 00:11:55.569 --rc geninfo_all_blocks=1 00:11:55.569 --rc geninfo_unexecuted_blocks=1 00:11:55.569 00:11:55.569 ' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:55.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.569 --rc genhtml_branch_coverage=1 00:11:55.569 --rc genhtml_function_coverage=1 00:11:55.569 --rc genhtml_legend=1 00:11:55.569 --rc geninfo_all_blocks=1 00:11:55.569 --rc geninfo_unexecuted_blocks=1 00:11:55.569 00:11:55.569 ' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:55.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.569 --rc genhtml_branch_coverage=1 00:11:55.569 --rc genhtml_function_coverage=1 00:11:55.569 --rc genhtml_legend=1 00:11:55.569 --rc geninfo_all_blocks=1 00:11:55.569 --rc geninfo_unexecuted_blocks=1 00:11:55.569 00:11:55.569 ' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:55.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.569 --rc genhtml_branch_coverage=1 00:11:55.569 --rc genhtml_function_coverage=1 00:11:55.569 --rc genhtml_legend=1 00:11:55.569 --rc geninfo_all_blocks=1 00:11:55.569 --rc geninfo_unexecuted_blocks=1 00:11:55.569 00:11:55.569 ' 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.569 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.570 11:22:50 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:55.570 11:22:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:55.570 1+0 records in 00:11:55.570 1+0 records out 00:11:55.570 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00609793 s, 688 MB/s 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:55.570 1+0 records in 00:11:55.570 1+0 records out 00:11:55.570 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00496324 s, 845 MB/s 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:55.570 1+0 records in 00:11:55.570 1+0 records out 00:11:55.570 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0049843 s, 842 MB/s 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:55.570 ************************************ 00:11:55.570 START TEST dd_sparse_file_to_file 00:11:55.570 ************************************ 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:55.570 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:55.570 { 00:11:55.570 "subsystems": [ 00:11:55.570 { 00:11:55.570 "subsystem": "bdev", 00:11:55.570 "config": [ 00:11:55.570 { 00:11:55.570 "params": { 00:11:55.570 "block_size": 4096, 00:11:55.570 "filename": "dd_sparse_aio_disk", 00:11:55.570 "name": "dd_aio" 00:11:55.570 }, 00:11:55.570 "method": "bdev_aio_create" 00:11:55.570 }, 00:11:55.570 { 00:11:55.570 "params": { 00:11:55.570 "lvs_name": "dd_lvstore", 00:11:55.570 "bdev_name": "dd_aio" 00:11:55.570 }, 00:11:55.570 "method": "bdev_lvol_create_lvstore" 00:11:55.570 }, 00:11:55.570 { 00:11:55.570 "method": "bdev_wait_for_examine" 00:11:55.570 } 00:11:55.570 ] 00:11:55.570 } 00:11:55.570 ] 00:11:55.570 } 00:11:55.570 [2024-10-07 11:22:51.086607] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
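The traces above are the whole setup for the sparse copy suite: prepare() builds a 36 MiB input file whose three 4 MiB data extents are separated by holes, plus a 100 MiB backing file for the dd_aio AIO bdev, and the JSON emitted by gen_conf tells spdk_dd to create that bdev and the dd_lvstore lvstore before copying. Condensed from the trace, with the same names and sizes the script uses:

    truncate dd_sparse_aio_disk --size 104857600         # 100 MiB backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # data extent at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # data extent at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # data extent at 32 MiB: apparent size 36 MiB, 12 MiB allocated

Because spdk_dd runs with --sparse (hole skipping in the input target, per the usage text later in this log), the stat assertions that follow expect file_zero1 and file_zero2 to match in both apparent size (37748736 bytes) and allocated blocks (24576 512-byte blocks).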
00:11:55.570 [2024-10-07 11:22:51.086714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:11:55.828 [2024-10-07 11:22:51.223698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.086 [2024-10-07 11:22:51.359281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.086 [2024-10-07 11:22:51.414120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:56.086  [2024-10-07T11:22:51.867Z] Copying: 12/36 [MB] (average 857 MBps) 00:11:56.344 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:56.344 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:56.344 00:11:56.344 real 0m0.766s 00:11:56.344 user 0m0.494s 00:11:56.344 sys 0m0.361s 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:56.345 ************************************ 00:11:56.345 END TEST dd_sparse_file_to_file 00:11:56.345 ************************************ 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:56.345 ************************************ 00:11:56.345 START TEST dd_sparse_file_to_bdev 00:11:56.345 ************************************ 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:56.345 11:22:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:56.603 { 00:11:56.603 "subsystems": [ 00:11:56.603 { 00:11:56.603 "subsystem": "bdev", 00:11:56.603 "config": [ 00:11:56.603 { 00:11:56.603 "params": { 00:11:56.603 "block_size": 4096, 00:11:56.603 "filename": "dd_sparse_aio_disk", 00:11:56.603 "name": "dd_aio" 00:11:56.603 }, 00:11:56.603 "method": "bdev_aio_create" 00:11:56.603 }, 00:11:56.603 { 00:11:56.603 "params": { 00:11:56.603 "lvs_name": "dd_lvstore", 00:11:56.603 "lvol_name": "dd_lvol", 00:11:56.603 "size_in_mib": 36, 00:11:56.603 "thin_provision": true 00:11:56.603 }, 00:11:56.603 "method": "bdev_lvol_create" 00:11:56.603 }, 00:11:56.603 { 00:11:56.603 "method": "bdev_wait_for_examine" 00:11:56.603 } 00:11:56.603 ] 00:11:56.603 } 00:11:56.603 ] 00:11:56.603 } 00:11:56.603 [2024-10-07 11:22:51.930872] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:56.603 [2024-10-07 11:22:51.931048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61646 ] 00:11:56.603 [2024-10-07 11:22:52.084556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.861 [2024-10-07 11:22:52.193073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.861 [2024-10-07 11:22:52.248708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:56.861  [2024-10-07T11:22:52.641Z] Copying: 12/36 [MB] (average 571 MBps) 00:11:57.118 00:11:57.118 00:11:57.118 real 0m0.735s 00:11:57.118 user 0m0.461s 00:11:57.118 sys 0m0.369s 00:11:57.118 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.118 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:57.119 ************************************ 00:11:57.119 END TEST dd_sparse_file_to_bdev 00:11:57.119 ************************************ 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:57.119 ************************************ 00:11:57.119 START TEST dd_sparse_bdev_to_file 00:11:57.119 ************************************ 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
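All three sparse cases reuse that setup and differ only in which side of the copy is a file and which is a bdev: file_zero1 to file_zero2, then file_zero2 into the thin-provisioned dd_lvstore/dd_lvol logical volume (the bdev_lvol_create parameters above, size_in_mib 36 and thin_provision true), and, in the test starting here, dd_lvstore/dd_lvol back out to file_zero3. Restated from the traces as a sketch, where gen_conf is the JSON generator from dd/common.sh and /dev/fd/62 in the traces is evidently its process-substitution descriptor:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$SPDK_DD" --if=file_zero1 --of=file_zero2         --bs=12582912 --sparse --json <(gen_conf)   # file -> file
    "$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json <(gen_conf)   # file -> thin lvol
    "$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json <(gen_conf)   # lvol -> file

The 12582912-byte block size matches the 12 MiB of real data in the input, and the final assertions compare file_zero2 against file_zero3 the same way the first case compared file_zero1 against file_zero2.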
00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:57.119 11:22:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:57.377 { 00:11:57.377 "subsystems": [ 00:11:57.377 { 00:11:57.377 "subsystem": "bdev", 00:11:57.377 "config": [ 00:11:57.377 { 00:11:57.377 "params": { 00:11:57.377 "block_size": 4096, 00:11:57.377 "filename": "dd_sparse_aio_disk", 00:11:57.377 "name": "dd_aio" 00:11:57.377 }, 00:11:57.377 "method": "bdev_aio_create" 00:11:57.377 }, 00:11:57.377 { 00:11:57.377 "method": "bdev_wait_for_examine" 00:11:57.377 } 00:11:57.377 ] 00:11:57.377 } 00:11:57.377 ] 00:11:57.377 } 00:11:57.377 [2024-10-07 11:22:52.682698] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:57.377 [2024-10-07 11:22:52.682820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61684 ] 00:11:57.377 [2024-10-07 11:22:52.820981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.636 [2024-10-07 11:22:52.930723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.636 [2024-10-07 11:22:52.985315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.636  [2024-10-07T11:22:53.418Z] Copying: 12/36 [MB] (average 923 MBps) 00:11:57.895 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:57.895 00:11:57.895 real 0m0.713s 00:11:57.895 user 0m0.469s 00:11:57.895 
sys 0m0.350s 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:57.895 ************************************ 00:11:57.895 END TEST dd_sparse_bdev_to_file 00:11:57.895 ************************************ 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:57.895 00:11:57.895 real 0m2.610s 00:11:57.895 user 0m1.594s 00:11:57.895 sys 0m1.306s 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.895 ************************************ 00:11:57.895 END TEST spdk_dd_sparse 00:11:57.895 11:22:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:57.895 ************************************ 00:11:58.154 11:22:53 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:58.154 11:22:53 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.154 11:22:53 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.154 11:22:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:58.154 ************************************ 00:11:58.154 START TEST spdk_dd_negative 00:11:58.154 ************************************ 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:58.154 * Looking for test storage... 
00:11:58.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:58.154 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:58.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.155 --rc genhtml_branch_coverage=1 00:11:58.155 --rc genhtml_function_coverage=1 00:11:58.155 --rc genhtml_legend=1 00:11:58.155 --rc geninfo_all_blocks=1 00:11:58.155 --rc geninfo_unexecuted_blocks=1 00:11:58.155 00:11:58.155 ' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:58.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.155 --rc genhtml_branch_coverage=1 00:11:58.155 --rc genhtml_function_coverage=1 00:11:58.155 --rc genhtml_legend=1 00:11:58.155 --rc geninfo_all_blocks=1 00:11:58.155 --rc geninfo_unexecuted_blocks=1 00:11:58.155 00:11:58.155 ' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:58.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.155 --rc genhtml_branch_coverage=1 00:11:58.155 --rc genhtml_function_coverage=1 00:11:58.155 --rc genhtml_legend=1 00:11:58.155 --rc geninfo_all_blocks=1 00:11:58.155 --rc geninfo_unexecuted_blocks=1 00:11:58.155 00:11:58.155 ' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:58.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.155 --rc genhtml_branch_coverage=1 00:11:58.155 --rc genhtml_function_coverage=1 00:11:58.155 --rc genhtml_legend=1 00:11:58.155 --rc geninfo_all_blocks=1 00:11:58.155 --rc geninfo_unexecuted_blocks=1 00:11:58.155 00:11:58.155 ' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.155 ************************************ 00:11:58.155 START TEST 
dd_invalid_arguments 00:11:58.155 ************************************ 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:58.155 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:58.414 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:58.414 00:11:58.414 CPU options: 00:11:58.414 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:58.414 (like [0,1,10]) 00:11:58.414 --lcores lcore to CPU mapping list. The list is in the format: 00:11:58.414 [<,lcores[@CPUs]>...] 00:11:58.414 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:58.414 Within the group, '-' is used for range separator, 00:11:58.414 ',' is used for single number separator. 00:11:58.414 '( )' can be omitted for single element group, 00:11:58.414 '@' can be omitted if cpus and lcores have the same value 00:11:58.414 --disable-cpumask-locks Disable CPU core lock files. 00:11:58.414 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:58.414 pollers in the app support interrupt mode) 00:11:58.414 -p, --main-core main (primary) core for DPDK 00:11:58.414 00:11:58.414 Configuration options: 00:11:58.414 -c, --config, --json JSON config file 00:11:58.414 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:58.414 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:58.414 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:58.414 --rpcs-allowed comma-separated list of permitted RPCS 00:11:58.414 --json-ignore-init-errors don't exit on invalid config entry 00:11:58.414 00:11:58.414 Memory options: 00:11:58.414 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:58.414 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:58.414 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:58.414 -R, --huge-unlink unlink huge files after initialization 00:11:58.414 -n, --mem-channels number of memory channels used for DPDK 00:11:58.414 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:58.414 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:58.414 --no-huge run without using hugepages 00:11:58.414 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:58.414 -i, --shm-id shared memory ID (optional) 00:11:58.414 -g, --single-file-segments force creating just one hugetlbfs file 00:11:58.414 00:11:58.414 PCI options: 00:11:58.414 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:58.414 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:58.414 -u, --no-pci disable PCI access 00:11:58.414 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:58.414 00:11:58.414 Log options: 00:11:58.414 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:58.414 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:58.414 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:58.414 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:58.414 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:11:58.414 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:11:58.414 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:11:58.414 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:11:58.414 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:11:58.414 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:11:58.414 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:58.414 --silence-noticelog disable notice level logging to stderr 00:11:58.414 00:11:58.414 Trace options: 00:11:58.414 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:58.414 setting 0 to disable trace (default 32768) 00:11:58.414 Tracepoints vary in size and can use more than one trace entry. 00:11:58.414 -e, --tpoint-group [:] 00:11:58.414 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:58.414 [2024-10-07 11:22:53.732932] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:58.414 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:58.414 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:11:58.414 bdev_raid, scheduler, all). 00:11:58.414 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:58.414 a tracepoint group. First tpoint inside a group can be enabled by 00:11:58.414 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:58.414 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:58.414 in /include/spdk_internal/trace_defs.h 00:11:58.414 00:11:58.414 Other options: 00:11:58.414 -h, --help show this usage 00:11:58.414 -v, --version print SPDK version 00:11:58.414 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:58.414 --env-context Opaque context for use of the env implementation 00:11:58.414 00:11:58.414 Application specific: 00:11:58.414 [--------- DD Options ---------] 00:11:58.414 --if Input file. Must specify either --if or --ib. 00:11:58.414 --ib Input bdev. Must specifier either --if or --ib 00:11:58.414 --of Output file. Must specify either --of or --ob. 00:11:58.414 --ob Output bdev. Must specify either --of or --ob. 00:11:58.414 --iflag Input file flags. 00:11:58.414 --oflag Output file flags. 00:11:58.414 --bs I/O unit size (default: 4096) 00:11:58.414 --qd Queue depth (default: 2) 00:11:58.414 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:58.414 --skip Skip this many I/O units at start of input. (default: 0) 00:11:58.414 --seek Skip this many I/O units at start of output. (default: 0) 00:11:58.414 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:58.414 --sparse Enable hole skipping in input target 00:11:58.414 Available iflag and oflag values: 00:11:58.414 append - append mode 00:11:58.414 direct - use direct I/O for data 00:11:58.414 directory - fail unless a directory 00:11:58.414 dsync - use synchronized I/O for data 00:11:58.414 noatime - do not update access time 00:11:58.414 noctty - do not assign controlling terminal from file 00:11:58.414 nofollow - do not follow symlinks 00:11:58.414 nonblock - use non-blocking I/O 00:11:58.414 sync - use synchronized I/O for data and metadata 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.414 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.414 00:11:58.414 real 0m0.076s 00:11:58.415 user 0m0.050s 00:11:58.415 sys 0m0.025s 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 ************************************ 00:11:58.415 END TEST dd_invalid_arguments 00:11:58.415 ************************************ 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 ************************************ 00:11:58.415 START TEST dd_double_input 00:11:58.415 ************************************ 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:58.415 [2024-10-07 11:22:53.856332] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
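Every spdk_dd_negative case follows the pattern traced here: an intentionally invalid spdk_dd invocation is wrapped in the NOT helper from autotest_common.sh, and the case passes only if spdk_dd exits non-zero after printing the expected *ERROR* line (for this one, spdk_dd.c:1487, either --if or --ib but not both). A minimal stand-in for that wrapper, simplified from the real helper, which additionally records the exit status in es; $SPDK_DD is shorthand for the full binary path shown in the traces:

    NOT() {                                        # pass when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT "$SPDK_DD" --if=test/dd/dd.dump0 --ib= --ob=   # double input must be rejected

The es=22 captured in the surrounding trace is the spdk_dd exit status, checked against the (( es > 128 )) and (( !es == 0 )) guards before the test is declared passed.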
00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.415 00:11:58.415 real 0m0.068s 00:11:58.415 user 0m0.040s 00:11:58.415 sys 0m0.027s 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 ************************************ 00:11:58.415 END TEST dd_double_input 00:11:58.415 ************************************ 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 ************************************ 00:11:58.415 START TEST dd_double_output 00:11:58.415 ************************************ 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.415 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:58.673 [2024-10-07 11:22:53.980232] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.673 ************************************ 00:11:58.673 END TEST dd_double_output 00:11:58.673 ************************************ 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.673 00:11:58.673 real 0m0.076s 00:11:58.673 user 0m0.056s 00:11:58.673 sys 0m0.018s 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.673 11:22:53 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.673 ************************************ 00:11:58.673 START TEST dd_no_input 00:11:58.673 ************************************ 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:58.673 [2024-10-07 11:22:54.104338] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.673 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.674 00:11:58.674 real 0m0.074s 00:11:58.674 user 0m0.046s 00:11:58.674 sys 0m0.027s 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.674 ************************************ 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:58.674 END TEST dd_no_input 00:11:58.674 ************************************ 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.674 ************************************ 00:11:58.674 START TEST dd_no_output 00:11:58.674 ************************************ 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.674 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:58.932 [2024-10-07 11:22:54.232870] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:58.932 11:22:54 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.932 00:11:58.932 real 0m0.080s 00:11:58.932 user 0m0.048s 00:11:58.932 sys 0m0.030s 00:11:58.932 ************************************ 00:11:58.932 END TEST dd_no_output 00:11:58.932 ************************************ 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.932 ************************************ 00:11:58.932 START TEST dd_wrong_blocksize 00:11:58.932 ************************************ 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:58.932 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:58.933 [2024-10-07 11:22:54.363558] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.933 ************************************ 00:11:58.933 END TEST dd_wrong_blocksize 00:11:58.933 ************************************ 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.933 00:11:58.933 real 0m0.078s 00:11:58.933 user 0m0.051s 00:11:58.933 sys 0m0.025s 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 ************************************ 00:11:58.933 START TEST dd_smaller_blocksize 00:11:58.933 ************************************ 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.933 
11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.933 11:22:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:59.191 [2024-10-07 11:22:54.500944] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:11:59.192 [2024-10-07 11:22:54.501069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61916 ] 00:11:59.192 [2024-10-07 11:22:54.642598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.450 [2024-10-07 11:22:54.766752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.450 [2024-10-07 11:22:54.824882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:59.709 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:59.968 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:59.968 [2024-10-07 11:22:55.453882] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:59.968 [2024-10-07 11:22:55.453969] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:00.226 [2024-10-07 11:22:55.577262] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.226 ************************************ 00:12:00.226 END TEST dd_smaller_blocksize 00:12:00.226 ************************************ 00:12:00.226 00:12:00.226 real 0m1.242s 00:12:00.226 user 0m0.482s 00:12:00.226 sys 0m0.649s 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:00.226 ************************************ 00:12:00.226 START TEST dd_invalid_count 00:12:00.226 ************************************ 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
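dd_smaller_blocksize, completed just above, is the one negative case that reaches the data path rather than argument parsing: --bs=99999999999999 makes spdk_dd try to allocate I/O buffers the EAL cannot back, so the two "couldn't find suitable memseg_list" notices and the "Cannot allocate memory - try smaller block size value" error are the expected outcome. Restated from the trace, with the repository paths shortened and $SPDK_DD as in the earlier sketch:

    "$SPDK_DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999   # expected to fail during buffer allocation

The resulting es=244 is reduced to 116 by the (( es > 128 )) branch and then mapped to es=1 by the case statement, so the final (( !es == 0 )) check passes precisely because the copy could not.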
00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:00.226 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:00.485 [2024-10-07 11:22:55.783162] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.485 00:12:00.485 real 0m0.068s 00:12:00.485 user 0m0.042s 00:12:00.485 sys 0m0.023s 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.485 ************************************ 00:12:00.485 END TEST dd_invalid_count 00:12:00.485 ************************************ 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:00.485 ************************************ 
00:12:00.485 START TEST dd_invalid_oflag 00:12:00.485 ************************************ 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:00.485 [2024-10-07 11:22:55.914747] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:12:00.485 ************************************ 00:12:00.485 END TEST dd_invalid_oflag 00:12:00.485 ************************************ 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.485 00:12:00.485 real 0m0.082s 00:12:00.485 user 0m0.057s 00:12:00.485 sys 0m0.022s 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:00.485 ************************************ 00:12:00.485 START TEST dd_invalid_iflag 00:12:00.485 
************************************ 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.485 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.486 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:00.486 11:22:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:00.743 [2024-10-07 11:22:56.048220] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.743 00:12:00.743 real 0m0.081s 00:12:00.743 user 0m0.046s 00:12:00.743 sys 0m0.032s 00:12:00.743 ************************************ 00:12:00.743 END TEST dd_invalid_iflag 00:12:00.743 ************************************ 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:00.743 ************************************ 00:12:00.743 START TEST dd_unknown_flag 00:12:00.743 ************************************ 00:12:00.743 
11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:00.743 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:00.743 [2024-10-07 11:22:56.183885] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:00.743 [2024-10-07 11:22:56.183978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62008 ] 00:12:01.002 [2024-10-07 11:22:56.322217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.002 [2024-10-07 11:22:56.450526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.002 [2024-10-07 11:22:56.517706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.260 [2024-10-07 11:22:56.565187] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:01.260 [2024-10-07 11:22:56.565286] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.260 [2024-10-07 11:22:56.565435] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:01.260 [2024-10-07 11:22:56.565465] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.260 [2024-10-07 11:22:56.565809] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:01.260 [2024-10-07 11:22:56.565837] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.260 [2024-10-07 11:22:56.565919] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:01.260 [2024-10-07 11:22:56.565947] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:01.260 [2024-10-07 11:22:56.696065] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.519 ************************************ 00:12:01.519 END TEST dd_unknown_flag 00:12:01.519 ************************************ 00:12:01.519 00:12:01.519 real 0m0.678s 00:12:01.519 user 0m0.394s 00:12:01.519 sys 0m0.188s 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:01.519 ************************************ 00:12:01.519 START TEST dd_invalid_json 00:12:01.519 ************************************ 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.519 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:01.520 11:22:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:01.520 [2024-10-07 11:22:56.909395] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:01.520 [2024-10-07 11:22:56.909504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62042 ] 00:12:01.520 [2024-10-07 11:22:57.041817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.779 [2024-10-07 11:22:57.154181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.779 [2024-10-07 11:22:57.154271] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:01.779 [2024-10-07 11:22:57.154285] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:01.779 [2024-10-07 11:22:57.154295] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.779 [2024-10-07 11:22:57.154369] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.779 00:12:01.779 real 0m0.408s 00:12:01.779 user 0m0.229s 00:12:01.779 sys 0m0.075s 00:12:01.779 ************************************ 00:12:01.779 END TEST dd_invalid_json 00:12:01.779 ************************************ 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.779 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:02.037 11:22:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:12:02.037 11:22:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 ************************************ 00:12:02.038 START TEST dd_invalid_seek 00:12:02.038 ************************************ 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:02.038 
11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:02.038 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:02.038 { 00:12:02.038 "subsystems": [ 00:12:02.038 { 00:12:02.038 "subsystem": "bdev", 00:12:02.038 "config": [ 00:12:02.038 { 00:12:02.038 "params": { 00:12:02.038 "block_size": 512, 00:12:02.038 "num_blocks": 512, 00:12:02.038 "name": "malloc0" 00:12:02.038 }, 00:12:02.038 "method": "bdev_malloc_create" 00:12:02.038 }, 00:12:02.038 { 00:12:02.038 "params": { 00:12:02.038 "block_size": 512, 00:12:02.038 "num_blocks": 512, 00:12:02.038 "name": "malloc1" 00:12:02.038 }, 00:12:02.038 "method": "bdev_malloc_create" 00:12:02.038 }, 00:12:02.038 { 00:12:02.038 "method": "bdev_wait_for_examine" 00:12:02.038 } 00:12:02.038 ] 00:12:02.038 } 00:12:02.038 ] 00:12:02.038 } 00:12:02.038 [2024-10-07 11:22:57.380212] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:02.038 [2024-10-07 11:22:57.380470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:12:02.038 [2024-10-07 11:22:57.520065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.297 [2024-10-07 11:22:57.632989] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.297 [2024-10-07 11:22:57.690470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:02.297 [2024-10-07 11:22:57.762643] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:12:02.297 [2024-10-07 11:22:57.762744] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.555 [2024-10-07 11:22:57.895291] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.555 ************************************ 00:12:02.555 END TEST dd_invalid_seek 00:12:02.555 ************************************ 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.555 00:12:02.555 real 0m0.683s 00:12:02.555 user 0m0.450s 00:12:02.555 sys 0m0.185s 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.555 11:22:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:02.555 ************************************ 00:12:02.555 START TEST dd_invalid_skip 00:12:02.555 ************************************ 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:02.555 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:02.813 [2024-10-07 11:22:58.111160] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:02.813 [2024-10-07 11:22:58.111452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:12:02.813 { 00:12:02.813 "subsystems": [ 00:12:02.813 { 00:12:02.813 "subsystem": "bdev", 00:12:02.813 "config": [ 00:12:02.813 { 00:12:02.813 "params": { 00:12:02.813 "block_size": 512, 00:12:02.813 "num_blocks": 512, 00:12:02.813 "name": "malloc0" 00:12:02.813 }, 00:12:02.813 "method": "bdev_malloc_create" 00:12:02.813 }, 00:12:02.813 { 00:12:02.813 "params": { 00:12:02.813 "block_size": 512, 00:12:02.813 "num_blocks": 512, 00:12:02.813 "name": "malloc1" 00:12:02.813 }, 00:12:02.813 "method": "bdev_malloc_create" 00:12:02.813 }, 00:12:02.813 { 00:12:02.813 "method": "bdev_wait_for_examine" 00:12:02.813 } 00:12:02.813 ] 00:12:02.813 } 00:12:02.813 ] 00:12:02.814 } 00:12:02.814 [2024-10-07 11:22:58.242374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.077 [2024-10-07 11:22:58.354858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.077 [2024-10-07 11:22:58.410743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.077 [2024-10-07 11:22:58.476871] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:12:03.077 [2024-10-07 11:22:58.476940] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.077 [2024-10-07 11:22:58.598875] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:12:03.336 ************************************ 00:12:03.336 END TEST dd_invalid_skip 00:12:03.336 ************************************ 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.336 00:12:03.336 real 0m0.643s 00:12:03.336 user 0m0.451s 00:12:03.336 sys 0m0.173s 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:03.336 ************************************ 00:12:03.336 START TEST dd_invalid_input_count 00:12:03.336 ************************************ 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:12:03.336 11:22:58 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:03.336 11:22:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:03.336 { 00:12:03.336 "subsystems": [ 00:12:03.336 { 00:12:03.336 "subsystem": "bdev", 00:12:03.336 "config": [ 00:12:03.336 { 00:12:03.336 "params": { 00:12:03.336 "block_size": 512, 00:12:03.337 "num_blocks": 512, 00:12:03.337 "name": "malloc0" 00:12:03.337 }, 
00:12:03.337 "method": "bdev_malloc_create" 00:12:03.337 }, 00:12:03.337 { 00:12:03.337 "params": { 00:12:03.337 "block_size": 512, 00:12:03.337 "num_blocks": 512, 00:12:03.337 "name": "malloc1" 00:12:03.337 }, 00:12:03.337 "method": "bdev_malloc_create" 00:12:03.337 }, 00:12:03.337 { 00:12:03.337 "method": "bdev_wait_for_examine" 00:12:03.337 } 00:12:03.337 ] 00:12:03.337 } 00:12:03.337 ] 00:12:03.337 } 00:12:03.337 [2024-10-07 11:22:58.810170] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:12:03.337 [2024-10-07 11:22:58.810262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:12:03.595 [2024-10-07 11:22:58.956146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.595 [2024-10-07 11:22:59.070159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.854 [2024-10-07 11:22:59.124944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.854 [2024-10-07 11:22:59.187322] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:12:03.854 [2024-10-07 11:22:59.187415] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.854 [2024-10-07 11:22:59.310514] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.113 00:12:04.113 real 0m0.662s 00:12:04.113 user 0m0.441s 00:12:04.113 sys 0m0.171s 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.113 ************************************ 00:12:04.113 END TEST dd_invalid_input_count 00:12:04.113 ************************************ 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 ************************************ 00:12:04.113 START TEST dd_invalid_output_count 00:12:04.113 ************************************ 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:04.113 11:22:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:04.113 [2024-10-07 11:22:59.515364] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:04.113 [2024-10-07 11:22:59.515444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62178 ] 00:12:04.113 { 00:12:04.113 "subsystems": [ 00:12:04.113 { 00:12:04.113 "subsystem": "bdev", 00:12:04.113 "config": [ 00:12:04.113 { 00:12:04.113 "params": { 00:12:04.113 "block_size": 512, 00:12:04.113 "num_blocks": 512, 00:12:04.113 "name": "malloc0" 00:12:04.113 }, 00:12:04.113 "method": "bdev_malloc_create" 00:12:04.113 }, 00:12:04.113 { 00:12:04.113 "method": "bdev_wait_for_examine" 00:12:04.113 } 00:12:04.113 ] 00:12:04.113 } 00:12:04.113 ] 00:12:04.113 } 00:12:04.372 [2024-10-07 11:22:59.651888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.372 [2024-10-07 11:22:59.765698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.372 [2024-10-07 11:22:59.821617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.372 [2024-10-07 11:22:59.877502] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:12:04.372 [2024-10-07 11:22:59.877806] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.630 [2024-10-07 11:22:59.999657] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.630 00:12:04.630 real 0m0.641s 00:12:04.630 user 0m0.419s 00:12:04.630 sys 0m0.175s 00:12:04.630 ************************************ 00:12:04.630 END TEST dd_invalid_output_count 00:12:04.630 ************************************ 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:04.630 11:23:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:04.631 ************************************ 00:12:04.631 START TEST dd_bs_not_multiple 00:12:04.631 ************************************ 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:04.631 11:23:00 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:04.631 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:04.889 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:04.889 [2024-10-07 11:23:00.212215] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:04.889 [2024-10-07 11:23:00.212400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62209 ] 00:12:04.889 { 00:12:04.889 "subsystems": [ 00:12:04.889 { 00:12:04.889 "subsystem": "bdev", 00:12:04.889 "config": [ 00:12:04.889 { 00:12:04.889 "params": { 00:12:04.889 "block_size": 512, 00:12:04.889 "num_blocks": 512, 00:12:04.889 "name": "malloc0" 00:12:04.889 }, 00:12:04.889 "method": "bdev_malloc_create" 00:12:04.889 }, 00:12:04.889 { 00:12:04.889 "params": { 00:12:04.889 "block_size": 512, 00:12:04.889 "num_blocks": 512, 00:12:04.889 "name": "malloc1" 00:12:04.889 }, 00:12:04.889 "method": "bdev_malloc_create" 00:12:04.889 }, 00:12:04.889 { 00:12:04.889 "method": "bdev_wait_for_examine" 00:12:04.889 } 00:12:04.889 ] 00:12:04.889 } 00:12:04.889 ] 00:12:04.889 } 00:12:04.889 [2024-10-07 11:23:00.352550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.147 [2024-10-07 11:23:00.469925] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.147 [2024-10-07 11:23:00.525607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.147 [2024-10-07 11:23:00.587780] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:12:05.147 [2024-10-07 11:23:00.587859] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:05.406 [2024-10-07 11:23:00.713662] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:12:05.406 ************************************ 00:12:05.406 END TEST dd_bs_not_multiple 00:12:05.406 ************************************ 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:05.406 00:12:05.406 real 0m0.663s 00:12:05.406 user 0m0.448s 00:12:05.406 sys 0m0.172s 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 END TEST spdk_dd_negative 00:12:05.406 ************************************ 00:12:05.406 00:12:05.406 real 0m7.400s 00:12:05.406 user 0m4.159s 00:12:05.406 sys 0m2.622s 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.406 11:23:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 ************************************ 00:12:05.406 END TEST spdk_dd 00:12:05.406 ************************************ 00:12:05.406 00:12:05.406 real 1m26.995s 00:12:05.406 user 0m56.950s 00:12:05.406 sys 0m37.007s 00:12:05.406 11:23:00 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:05.406 11:23:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:05.406 11:23:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:05.406 11:23:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:12:05.406 11:23:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:12:05.406 11:23:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.406 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:12:05.664 11:23:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:12:05.664 11:23:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:12:05.664 11:23:00 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:12:05.664 11:23:00 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:12:05.664 11:23:00 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:12:05.664 11:23:00 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:12:05.664 11:23:00 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:05.664 11:23:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.664 11:23:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.664 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:12:05.664 ************************************ 00:12:05.664 START TEST nvmf_tcp 00:12:05.664 ************************************ 00:12:05.664 11:23:00 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:05.664 * Looking for test storage... 00:12:05.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:05.664 11:23:01 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.664 11:23:01 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.664 11:23:01 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.664 11:23:01 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:05.664 11:23:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.665 11:23:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.665 --rc genhtml_branch_coverage=1 00:12:05.665 --rc genhtml_function_coverage=1 00:12:05.665 --rc genhtml_legend=1 00:12:05.665 --rc geninfo_all_blocks=1 00:12:05.665 --rc geninfo_unexecuted_blocks=1 00:12:05.665 00:12:05.665 ' 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.665 --rc genhtml_branch_coverage=1 00:12:05.665 --rc genhtml_function_coverage=1 00:12:05.665 --rc genhtml_legend=1 00:12:05.665 --rc geninfo_all_blocks=1 00:12:05.665 --rc geninfo_unexecuted_blocks=1 00:12:05.665 00:12:05.665 ' 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.665 --rc genhtml_branch_coverage=1 00:12:05.665 --rc genhtml_function_coverage=1 00:12:05.665 --rc genhtml_legend=1 00:12:05.665 --rc geninfo_all_blocks=1 00:12:05.665 --rc geninfo_unexecuted_blocks=1 00:12:05.665 00:12:05.665 ' 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.665 --rc genhtml_branch_coverage=1 00:12:05.665 --rc genhtml_function_coverage=1 00:12:05.665 --rc genhtml_legend=1 00:12:05.665 --rc geninfo_all_blocks=1 00:12:05.665 --rc geninfo_unexecuted_blocks=1 00:12:05.665 00:12:05.665 ' 00:12:05.665 11:23:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:05.665 11:23:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:05.665 11:23:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.665 11:23:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:05.665 ************************************ 00:12:05.665 START TEST nvmf_target_core 00:12:05.665 ************************************ 00:12:05.665 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:05.926 * Looking for test storage... 00:12:05.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.926 --rc genhtml_branch_coverage=1 00:12:05.926 --rc genhtml_function_coverage=1 00:12:05.926 --rc genhtml_legend=1 00:12:05.926 --rc geninfo_all_blocks=1 00:12:05.926 --rc geninfo_unexecuted_blocks=1 00:12:05.926 00:12:05.926 ' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.926 --rc genhtml_branch_coverage=1 00:12:05.926 --rc genhtml_function_coverage=1 00:12:05.926 --rc genhtml_legend=1 00:12:05.926 --rc geninfo_all_blocks=1 00:12:05.926 --rc geninfo_unexecuted_blocks=1 00:12:05.926 00:12:05.926 ' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.926 --rc genhtml_branch_coverage=1 00:12:05.926 --rc genhtml_function_coverage=1 00:12:05.926 --rc genhtml_legend=1 00:12:05.926 --rc geninfo_all_blocks=1 00:12:05.926 --rc geninfo_unexecuted_blocks=1 00:12:05.926 00:12:05.926 ' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.926 --rc genhtml_branch_coverage=1 00:12:05.926 --rc genhtml_function_coverage=1 00:12:05.926 --rc genhtml_legend=1 00:12:05.926 --rc geninfo_all_blocks=1 00:12:05.926 --rc geninfo_unexecuted_blocks=1 00:12:05.926 00:12:05.926 ' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.926 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
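The host identity picked up here comes from nvme gen-hostnqn, which emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>; the trace then reuses the UUID suffix as NVME_HOSTID. A minimal sketch of that derivation (the exact parsing in test/nvmf/common.sh may differ; nvme-cli must be installed):

    # Generate a host NQN and reuse its UUID suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # strip everything up to and including ":uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"

The resulting pair is what later nvme connect / attach steps pass along so the target can match the whitelisted host.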
00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:05.927 ************************************ 00:12:05.927 START TEST nvmf_host_management 00:12:05.927 ************************************ 00:12:05.927 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:06.187 * Looking for test storage... 
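The stray complaint "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" just traced (and repeated further down) is benign: build_nvmf_app_args runs a numeric test, '[' '' -eq 1 ']', against a variable that is unset in this configuration, and the harness simply ignores the non-zero status. Defaulting the operand keeps that kind of gate quiet; a sketch with a placeholder flag name, since the actual variable tested at line 33 is not visible in the trace:

    # SOME_TEST_FLAG is a stand-in; common.sh tests a different (here unset) flag at line 33.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-extra-arg)   # placeholder branch body
    fi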
00:12:06.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.187 --rc genhtml_branch_coverage=1 00:12:06.187 --rc genhtml_function_coverage=1 00:12:06.187 --rc genhtml_legend=1 00:12:06.187 --rc geninfo_all_blocks=1 00:12:06.187 --rc geninfo_unexecuted_blocks=1 00:12:06.187 00:12:06.187 ' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.187 --rc genhtml_branch_coverage=1 00:12:06.187 --rc genhtml_function_coverage=1 00:12:06.187 --rc genhtml_legend=1 00:12:06.187 --rc geninfo_all_blocks=1 00:12:06.187 --rc geninfo_unexecuted_blocks=1 00:12:06.187 00:12:06.187 ' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.187 --rc genhtml_branch_coverage=1 00:12:06.187 --rc genhtml_function_coverage=1 00:12:06.187 --rc genhtml_legend=1 00:12:06.187 --rc geninfo_all_blocks=1 00:12:06.187 --rc geninfo_unexecuted_blocks=1 00:12:06.187 00:12:06.187 ' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.187 --rc genhtml_branch_coverage=1 00:12:06.187 --rc genhtml_function_coverage=1 00:12:06.187 --rc genhtml_legend=1 00:12:06.187 --rc geninfo_all_blocks=1 00:12:06.187 --rc geninfo_unexecuted_blocks=1 00:12:06.187 00:12:06.187 ' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
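The lcov gate that repeats before each test group (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' and ':' and compares them component by component, treating missing components as zero. A condensed sketch of that logic; the in-tree scripts/common.sh handles more operators and edge cases:

    cmp_versions() {
        # $1 and $3 are version strings, $2 is the comparison operator ("<" here).
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == ">" ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == "<" ]]; return; }
        done
        [[ $2 == "==" || $2 == "<=" || $2 == ">=" ]]
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    lt 1.15 2 && echo "installed lcov is older than 2.x"   # true for the 1.15 seen here

A pre-2.x lcov is why the older "--rc lcov_branch_coverage=1" style options show up in the LCOV_OPTS exports above.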
00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.187 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.187 11:23:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:06.187 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:06.188 Cannot find device "nvmf_init_br" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:06.188 Cannot find device "nvmf_init_br2" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:06.188 Cannot find device "nvmf_tgt_br" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.188 Cannot find device "nvmf_tgt_br2" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:06.188 Cannot find device "nvmf_init_br" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:06.188 Cannot find device "nvmf_init_br2" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:06.188 Cannot find device "nvmf_tgt_br" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:06.188 Cannot find device "nvmf_tgt_br2" 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:12:06.188 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:06.188 Cannot find device "nvmf_br" 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:06.445 Cannot find device "nvmf_init_if" 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:06.445 Cannot find device "nvmf_init_if2" 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:06.445 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:06.704 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:06.704 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.704 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.704 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:06.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:12:06.704 00:12:06.704 --- 10.0.0.3 ping statistics --- 00:12:06.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.704 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:06.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:06.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:06.704 00:12:06.704 --- 10.0.0.4 ping statistics --- 00:12:06.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.704 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:06.704 00:12:06.704 --- 10.0.0.1 ping statistics --- 00:12:06.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.704 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:06.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:06.704 00:12:06.704 --- 10.0.0.2 ping statistics --- 00:12:06.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.704 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62557 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62557 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62557 ']' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.704 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 [2024-10-07 11:23:02.125506] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
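The nvmf_veth_init sequence traced above builds the test topology from scratch: a network namespace nvmf_tgt_ns_spdk holding the target-side ends of the veth pairs, initiator-side peers left in the root namespace, everything joined by the nvmf_br bridge, iptables ACCEPT rules for port 4420, and a ping sweep to prove reachability before modprobe nvme-tcp. Condensed to a single initiator/target pair (the trace creates a second pair, nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4, the same way), roughly the same topology can be rebuilt standalone with:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if carries the IP address, *_br is the bridge-facing peer
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two sides together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # let NVMe/TCP traffic in and verify both directions
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier are just the teardown half of the same helper running against a clean machine before setup.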
00:12:06.704 [2024-10-07 11:23:02.125596] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.962 [2024-10-07 11:23:02.261519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.962 [2024-10-07 11:23:02.395182] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.962 [2024-10-07 11:23:02.395249] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.962 [2024-10-07 11:23:02.395264] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.962 [2024-10-07 11:23:02.395275] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.962 [2024-10-07 11:23:02.395284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.962 [2024-10-07 11:23:02.396702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.962 [2024-10-07 11:23:02.396825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:12:06.962 [2024-10-07 11:23:02.396753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.962 [2024-10-07 11:23:02.396830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.962 [2024-10-07 11:23:02.456917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 [2024-10-07 11:23:03.257970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
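With the plumbing done, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, pid 62557 here), waits for its RPC socket, and creates the TCP transport with the options shown above. A standalone approximation, assuming a built tree under SPDK_DIR; the harness's waitforlisten does smarter pid/socket polling than the simple loop below:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # wait for the default RPC socket, then block until framework init completes
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
    "$SPDK_DIR/scripts/rpc.py" framework_wait_init
    # same transport options as in the trace
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192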
00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:07.894 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.895 Malloc0 00:12:07.895 [2024-10-07 11:23:03.326438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62617 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62617 /var/tmp/bdevperf.sock 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62617 ']' 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
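The RPC batch assembled into rpcs.txt and piped through rpc_cmd at @23/@30 is not echoed, but its effect is visible just above: a Malloc0 bdev (64 MiB of 512-byte blocks per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE) exposed by subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.3:4420, with nqn.2016-06.io.spdk:host0 allowed in. A plausible reconstruction using the standard SPDK RPCs, though the script's exact flags may differ (the serial is assumed to be the NVMF_SERIAL value set earlier):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Leaving allow_any_host off and whitelisting host0 explicitly is what lets the test later force a disconnect simply by removing that host again.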
00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:07.895 { 00:12:07.895 "params": { 00:12:07.895 "name": "Nvme$subsystem", 00:12:07.895 "trtype": "$TEST_TRANSPORT", 00:12:07.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:07.895 "adrfam": "ipv4", 00:12:07.895 "trsvcid": "$NVMF_PORT", 00:12:07.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:07.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:07.895 "hdgst": ${hdgst:-false}, 00:12:07.895 "ddgst": ${ddgst:-false} 00:12:07.895 }, 00:12:07.895 "method": "bdev_nvme_attach_controller" 00:12:07.895 } 00:12:07.895 EOF 00:12:07.895 )") 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:12:07.895 11:23:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:07.895 "params": { 00:12:07.895 "name": "Nvme0", 00:12:07.895 "trtype": "tcp", 00:12:07.895 "traddr": "10.0.0.3", 00:12:07.895 "adrfam": "ipv4", 00:12:07.895 "trsvcid": "4420", 00:12:07.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:07.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:07.895 "hdgst": false, 00:12:07.895 "ddgst": false 00:12:07.895 }, 00:12:07.895 "method": "bdev_nvme_attach_controller" 00:12:07.895 }' 00:12:07.895 [2024-10-07 11:23:03.416431] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:12:07.895 [2024-10-07 11:23:03.416515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62617 ] 00:12:08.153 [2024-10-07 11:23:03.551438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.410 [2024-10-07 11:23:03.674488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.410 [2024-10-07 11:23:03.743998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:08.410 Running I/O for 10 seconds... 
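gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed above and hands it to bdevperf as --json /dev/fd/63, i.e. the file descriptor behind a bash process substitution. A standalone equivalent writes the same config to a file, wrapped in the subsystems/bdev/config envelope used by the earlier spdk_dd config (the envelope is assumed; only the inner entry appears verbatim in this trace):

    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
        -q 64 -o 65536 -w verify -t 10

While the 10-second verify workload runs, the harness polls bdev_get_iostat over the bdevperf RPC socket (jq -r '.bdevs[0].num_read_ops') until at least 100 reads have completed (899 at the first sample below), and only then removes host0 from cnode0 to exercise the abort path seen in the qpair messages that follow.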
00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:08.977 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:09.238 11:23:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.238 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:09.238 task offset: 0 on job bdev=Nvme0n1 fails 00:12:09.238 00:12:09.238 Latency(us) 00:12:09.238 [2024-10-07T11:23:04.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.238 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:09.238 Job: Nvme0n1 ended in about 0.71 seconds with error 00:12:09.238 Verification LBA range: start 0x0 length 0x400 00:12:09.238 Nvme0n1 : 0.71 1448.31 90.52 90.52 0.00 40365.55 2278.87 46947.61 00:12:09.238 [2024-10-07T11:23:04.761Z] =================================================================================================================== 00:12:09.238 [2024-10-07T11:23:04.761Z] Total : 1448.31 90.52 90.52 0.00 40365.55 2278.87 46947.61 00:12:09.238 [2024-10-07 11:23:04.577347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.577980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.577990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:12:09.238 [2024-10-07 11:23:04.578002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 
[2024-10-07 11:23:04.578224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 
11:23:04.578489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.238 [2024-10-07 11:23:04.578669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.238 [2024-10-07 11:23:04.578679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.239 [2024-10-07 11:23:04.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.578887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69c6b0 is same with the state(6) to be set 00:12:09.239 [2024-10-07 11:23:04.578968] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x69c6b0 was disconnected and freed. reset controller. 
00:12:09.239 [2024-10-07 11:23:04.579079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.239 [2024-10-07 11:23:04.579096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.579107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.239 [2024-10-07 11:23:04.579117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.579132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.239 [2024-10-07 11:23:04.579142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.579152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.239 [2024-10-07 11:23:04.579161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.239 [2024-10-07 11:23:04.579171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69cb20 is same with the state(6) to be set 00:12:09.239 [2024-10-07 11:23:04.580266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:09.239 [2024-10-07 11:23:04.582221] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.239 [2024-10-07 11:23:04.582247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69cb20 (9): Bad file descriptor 00:12:09.239 [2024-10-07 11:23:04.588936] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
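A quick unit check on the failed-job table above (not part of the captured output): bdevperf was launched with -q 64 -o 65536, so each I/O is 64 KiB and the MiB/s column should be IOPS/16, which matches 1448.31 IOPS -> 90.52 MiB/s over the 0.71 s the job ran before the abort:

# unit check only; numbers taken from the table above
awk 'BEGIN { printf "%.2f MiB/s\n", 1448.31 * 65536 / 1048576 }'   # prints 90.52 MiB/s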
00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62617 00:12:10.209 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62617) - No such process 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:10.209 { 00:12:10.209 "params": { 00:12:10.209 "name": "Nvme$subsystem", 00:12:10.209 "trtype": "$TEST_TRANSPORT", 00:12:10.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.209 "adrfam": "ipv4", 00:12:10.209 "trsvcid": "$NVMF_PORT", 00:12:10.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.209 "hdgst": ${hdgst:-false}, 00:12:10.209 "ddgst": ${ddgst:-false} 00:12:10.209 }, 00:12:10.209 "method": "bdev_nvme_attach_controller" 00:12:10.209 } 00:12:10.209 EOF 00:12:10.209 )") 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:12:10.209 11:23:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:10.209 "params": { 00:12:10.209 "name": "Nvme0", 00:12:10.209 "trtype": "tcp", 00:12:10.209 "traddr": "10.0.0.3", 00:12:10.209 "adrfam": "ipv4", 00:12:10.209 "trsvcid": "4420", 00:12:10.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:10.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:10.209 "hdgst": false, 00:12:10.209 "ddgst": false 00:12:10.209 }, 00:12:10.209 "method": "bdev_nvme_attach_controller" 00:12:10.209 }' 00:12:10.209 [2024-10-07 11:23:05.638607] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:12:10.209 [2024-10-07 11:23:05.638724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62655 ] 00:12:10.467 [2024-10-07 11:23:05.781825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.467 [2024-10-07 11:23:05.906939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.467 [2024-10-07 11:23:05.973923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.724 Running I/O for 1 seconds... 00:12:11.658 1472.00 IOPS, 92.00 MiB/s 00:12:11.658 Latency(us) 00:12:11.658 [2024-10-07T11:23:07.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.658 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:11.658 Verification LBA range: start 0x0 length 0x400 00:12:11.658 Nvme0n1 : 1.01 1524.55 95.28 0.00 0.00 41135.35 4259.84 39798.23 00:12:11.658 [2024-10-07T11:23:07.181Z] =================================================================================================================== 00:12:11.658 [2024-10-07T11:23:07.181Z] Total : 1524.55 95.28 0.00 0.00 41135.35 4259.84 39798.23 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.915 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.915 rmmod nvme_tcp 00:12:11.915 rmmod nvme_fabrics 00:12:11.915 rmmod nvme_keyring 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62557 ']' 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62557 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62557 ']' 00:12:12.173 11:23:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62557 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62557 00:12:12.173 killing process with pid 62557 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62557' 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62557 00:12:12.173 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62557 00:12:12.431 [2024-10-07 11:23:07.715764] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:12.431 11:23:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.431 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.691 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:12:12.691 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:12.691 00:12:12.691 real 0m6.584s 00:12:12.691 user 0m24.391s 00:12:12.691 sys 0m1.675s 00:12:12.691 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.691 11:23:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.691 ************************************ 00:12:12.691 END TEST nvmf_host_management 00:12:12.691 ************************************ 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:12.691 ************************************ 00:12:12.691 START TEST nvmf_lvol 00:12:12.691 ************************************ 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:12.691 * Looking for test storage... 
00:12:12.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:12.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.691 --rc genhtml_branch_coverage=1 00:12:12.691 --rc genhtml_function_coverage=1 00:12:12.691 --rc genhtml_legend=1 00:12:12.691 --rc geninfo_all_blocks=1 00:12:12.691 --rc geninfo_unexecuted_blocks=1 00:12:12.691 00:12:12.691 ' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:12.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.691 --rc genhtml_branch_coverage=1 00:12:12.691 --rc genhtml_function_coverage=1 00:12:12.691 --rc genhtml_legend=1 00:12:12.691 --rc geninfo_all_blocks=1 00:12:12.691 --rc geninfo_unexecuted_blocks=1 00:12:12.691 00:12:12.691 ' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:12.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.691 --rc genhtml_branch_coverage=1 00:12:12.691 --rc genhtml_function_coverage=1 00:12:12.691 --rc genhtml_legend=1 00:12:12.691 --rc geninfo_all_blocks=1 00:12:12.691 --rc geninfo_unexecuted_blocks=1 00:12:12.691 00:12:12.691 ' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:12.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.691 --rc genhtml_branch_coverage=1 00:12:12.691 --rc genhtml_function_coverage=1 00:12:12.691 --rc genhtml_legend=1 00:12:12.691 --rc geninfo_all_blocks=1 00:12:12.691 --rc geninfo_unexecuted_blocks=1 00:12:12.691 00:12:12.691 ' 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.691 11:23:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.691 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.951 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:12.952 
11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:12.952 Cannot find device "nvmf_init_br" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:12.952 Cannot find device "nvmf_init_br2" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:12.952 Cannot find device "nvmf_tgt_br" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.952 Cannot find device "nvmf_tgt_br2" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:12.952 Cannot find device "nvmf_init_br" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:12.952 Cannot find device "nvmf_init_br2" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:12.952 Cannot find device "nvmf_tgt_br" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:12.952 Cannot find device "nvmf_tgt_br2" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:12.952 Cannot find device "nvmf_br" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:12.952 Cannot find device "nvmf_init_if" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:12.952 Cannot find device "nvmf_init_if2" 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:12:12.952 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.953 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:13.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:13.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:12:13.212 00:12:13.212 --- 10.0.0.3 ping statistics --- 00:12:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.212 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:13.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:13.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:13.212 00:12:13.212 --- 10.0.0.4 ping statistics --- 00:12:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.212 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:13.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:13.212 00:12:13.212 --- 10.0.0.1 ping statistics --- 00:12:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.212 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:13.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:13.212 00:12:13.212 --- 10.0.0.2 ping statistics --- 00:12:13.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.212 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=62924 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 62924 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62924 ']' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.212 11:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:13.470 [2024-10-07 11:23:08.735591] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
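Once the target application is up inside the namespace, the nvmf_lvol body that follows drives it entirely over rpc.py: a TCP transport is created, two malloc bdevs (bdev_malloc_create 64 512) are combined into a raid0, a logical volume store is built on the raid, a 20 MiB lvol from it is exported through NVMe-oF subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420, and while spdk_nvme_perf writes to it the lvol is snapshotted, resized to 30 MiB, cloned, and the clone inflated. A condensed sketch of that RPC sequence (rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace; UUIDs are captured into shell variables):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                     # Malloc0
    rpc.py bdev_malloc_create 64 512                     # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken while perf I/O is in flight
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"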
00:12:13.470 [2024-10-07 11:23:08.735724] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.470 [2024-10-07 11:23:08.879953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:13.728 [2024-10-07 11:23:09.016175] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.728 [2024-10-07 11:23:09.016246] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.728 [2024-10-07 11:23:09.016271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.728 [2024-10-07 11:23:09.016282] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.728 [2024-10-07 11:23:09.016291] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.728 [2024-10-07 11:23:09.016901] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.728 [2024-10-07 11:23:09.017343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.728 [2024-10-07 11:23:09.017344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.728 [2024-10-07 11:23:09.075062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.293 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:14.861 [2024-10-07 11:23:10.130854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.861 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.118 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:15.119 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.379 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:15.379 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:15.637 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:16.220 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0de4ced2-b244-4a47-b6f1-d0dc0772d0ad 00:12:16.220 11:23:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0de4ced2-b244-4a47-b6f1-d0dc0772d0ad lvol 20 00:12:16.478 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3a5cf27a-fb83-4cd5-899e-36dacaa8209e 00:12:16.478 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:16.736 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a5cf27a-fb83-4cd5-899e-36dacaa8209e 00:12:16.994 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:17.252 [2024-10-07 11:23:12.582057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:17.252 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:17.510 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63005 00:12:17.510 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:17.510 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:18.445 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3a5cf27a-fb83-4cd5-899e-36dacaa8209e MY_SNAPSHOT 00:12:19.044 11:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e04ad542-e258-4927-8a6f-8644de0e2eed 00:12:19.044 11:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3a5cf27a-fb83-4cd5-899e-36dacaa8209e 30 00:12:19.307 11:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e04ad542-e258-4927-8a6f-8644de0e2eed MY_CLONE 00:12:19.565 11:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a4c6f3d-74d8-4fd8-9f19-2ca4c7dc6130 00:12:19.565 11:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0a4c6f3d-74d8-4fd8-9f19-2ca4c7dc6130 00:12:20.131 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63005 00:12:28.240 Initializing NVMe Controllers 00:12:28.240 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:12:28.240 Controller IO queue size 128, less than required. 00:12:28.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:28.240 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:28.240 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:28.240 Initialization complete. Launching workers. 
00:12:28.240 ======================================================== 00:12:28.240 Latency(us) 00:12:28.240 Device Information : IOPS MiB/s Average min max 00:12:28.240 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9620.50 37.58 13316.39 2340.30 88610.73 00:12:28.240 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9548.00 37.30 13414.43 1891.69 66245.23 00:12:28.240 ======================================================== 00:12:28.240 Total : 19168.50 74.88 13365.23 1891.69 88610.73 00:12:28.240 00:12:28.240 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:28.240 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3a5cf27a-fb83-4cd5-899e-36dacaa8209e 00:12:28.499 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0de4ced2-b244-4a47-b6f1-d0dc0772d0ad 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.767 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.767 rmmod nvme_tcp 00:12:29.026 rmmod nvme_fabrics 00:12:29.026 rmmod nvme_keyring 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 62924 ']' 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 62924 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62924 ']' 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62924 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62924 00:12:29.026 killing process with pid 62924 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62924' 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62924 00:12:29.026 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62924 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:29.286 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:12:29.546 00:12:29.546 real 0m16.904s 00:12:29.546 user 1m8.632s 00:12:29.546 sys 0m4.314s 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 ************************************ 00:12:29.546 END TEST nvmf_lvol 00:12:29.546 ************************************ 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 ************************************ 00:12:29.546 START TEST nvmf_lvs_grow 00:12:29.546 ************************************ 00:12:29.546 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:29.546 * Looking for test storage... 00:12:29.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:29.546 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:29.546 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:12:29.546 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:29.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.805 --rc genhtml_branch_coverage=1 00:12:29.805 --rc genhtml_function_coverage=1 00:12:29.805 --rc genhtml_legend=1 00:12:29.805 --rc geninfo_all_blocks=1 00:12:29.805 --rc geninfo_unexecuted_blocks=1 00:12:29.805 00:12:29.805 ' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:29.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.805 --rc genhtml_branch_coverage=1 00:12:29.805 --rc genhtml_function_coverage=1 00:12:29.805 --rc genhtml_legend=1 00:12:29.805 --rc geninfo_all_blocks=1 00:12:29.805 --rc geninfo_unexecuted_blocks=1 00:12:29.805 00:12:29.805 ' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:29.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.805 --rc genhtml_branch_coverage=1 00:12:29.805 --rc genhtml_function_coverage=1 00:12:29.805 --rc genhtml_legend=1 00:12:29.805 --rc geninfo_all_blocks=1 00:12:29.805 --rc geninfo_unexecuted_blocks=1 00:12:29.805 00:12:29.805 ' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:29.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.805 --rc genhtml_branch_coverage=1 00:12:29.805 --rc genhtml_function_coverage=1 00:12:29.805 --rc genhtml_legend=1 00:12:29.805 --rc geninfo_all_blocks=1 00:12:29.805 --rc geninfo_unexecuted_blocks=1 00:12:29.805 00:12:29.805 ' 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:29.805 11:23:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.805 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
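The "[: : integer expression expected" complaint from nvmf/common.sh line 33, visible a few entries above, comes from the test '[' '' -eq 1 ']' traced just before it: -eq needs integers on both sides, and the variable under test expands to an empty string, so bash reports the error and the check simply evaluates as not taken (the run continues normally, as the following @37 check shows). A minimal reproduction of the message and the usual guard; the variable name here is illustrative, not the one used by nvmf/common.sh:

    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # "[: : integer expression expected"; echo is not reached
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting the empty value to 0 keeps the test quiet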
00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:29.806 Cannot find device "nvmf_init_br" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:29.806 Cannot find device "nvmf_init_br2" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:29.806 Cannot find device "nvmf_tgt_br" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.806 Cannot find device "nvmf_tgt_br2" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:29.806 Cannot find device "nvmf_init_br" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:29.806 Cannot find device "nvmf_init_br2" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:29.806 Cannot find device "nvmf_tgt_br" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:29.806 Cannot find device "nvmf_tgt_br2" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:29.806 Cannot find device "nvmf_br" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:29.806 Cannot find device "nvmf_init_if" 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:12:29.806 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:30.065 Cannot find device "nvmf_init_if2" 00:12:30.065 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
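The ipts wrapper invoked next tags every rule it installs with an "-m comment --comment 'SPDK_NVMF:...'" marker. That marker is what lets the teardown seen at the end of the nvmf_lvol run above (iptr: iptables-save | grep -v SPDK_NVMF | iptables-restore) strip exactly the rules this test added and nothing else. A sketch of the pattern, using the same rule and tag that appear in the trace:

    # install: accept NVMe/TCP traffic on the test interface, tagged so it can be found again
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore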
00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:30.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:12:30.066 00:12:30.066 --- 10.0.0.3 ping statistics --- 00:12:30.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.066 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:30.066 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:30.066 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:12:30.066 00:12:30.066 --- 10.0.0.4 ping statistics --- 00:12:30.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.066 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:30.066 00:12:30.066 --- 10.0.0.1 ping statistics --- 00:12:30.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.066 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:30.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:30.066 00:12:30.066 --- 10.0.0.2 ping statistics --- 00:12:30.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.066 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:30.066 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63392 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63392 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63392 ']' 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.325 11:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.325 [2024-10-07 11:23:25.665896] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
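The -m and -c arguments in these runs are CPU core bitmasks: the nvmf_lvol target above used -m 0x7 (binary 111, cores 0-2, matching its three reactors), its perf client used -c 0x18 (binary 11000, cores 3-4, matching the "from core 3"/"from core 4" rows of the latency table), while this nvmf_lvs_grow target runs on -m 0x1 (core 0 only) and the bdevperf client started later uses -m 0x2 (core 1). A small sketch for decoding such a mask in the shell:

    mask=0x18
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core"   # prints: core 3, core 4
    done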
00:12:30.325 [2024-10-07 11:23:25.666037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.325 [2024-10-07 11:23:25.818172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.583 [2024-10-07 11:23:25.943333] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.583 [2024-10-07 11:23:25.943643] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.583 [2024-10-07 11:23:25.943764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.583 [2024-10-07 11:23:25.943876] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.583 [2024-10-07 11:23:25.943976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.583 [2024-10-07 11:23:25.944538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.583 [2024-10-07 11:23:26.002705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.519 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.786 [2024-10-07 11:23:27.056299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.786 ************************************ 00:12:31.786 START TEST lvs_grow_clean 00:12:31.786 ************************************ 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:31.786 11:23:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:31.786 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:32.045 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:32.045 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:32.304 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:32.304 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:32.304 11:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:32.563 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:32.563 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:32.563 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb166428-f1b5-4f57-a68e-0ed1a7db807c lvol 150 00:12:33.130 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0da238a2-bcae-4edf-a8f2-47dc8837f140 00:12:33.130 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:33.130 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:33.130 [2024-10-07 11:23:28.643269] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:33.130 [2024-10-07 11:23:28.643409] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:33.130 true 00:12:33.389 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:33.389 11:23:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:33.647 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:33.647 11:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:33.906 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0da238a2-bcae-4edf-a8f2-47dc8837f140 00:12:34.165 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:34.425 [2024-10-07 11:23:29.723873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.425 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63486 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63486 /var/tmp/bdevperf.sock 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63486 ']' 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.687 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:34.687 [2024-10-07 11:23:30.055146] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
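The xtrace entries above reduce to a short, fixed RPC sequence that builds the stack under test. A condensed sketch of it, using the commands and values printed in this run (repo-relative paths; <lvs-uuid> and <lvol-uuid> stand for the fb166428-... lvstore and 0da238a2-... lvol reported above):

  truncate -s 200M test/nvmf/target/aio_bdev                                   # backing file for the AIO bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096       # register it as bdev "aio_bdev", 4 KiB blocks
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
                                                                               # 4 MiB clusters -> the 49 data clusters checked above
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                       # 150 MiB logical volume
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev                                      # AIO bdev grows from 51200 to 102400 blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Note that rescanning the AIO bdev only grows the base bdev; the lvstore still reports 49 data clusters until bdev_lvol_grow_lvstore is issued during the I/O run below.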
00:12:34.687 [2024-10-07 11:23:30.055255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63486 ] 00:12:34.687 [2024-10-07 11:23:30.198236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.945 [2024-10-07 11:23:30.328794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.945 [2024-10-07 11:23:30.386004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.879 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.879 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:12:35.879 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:35.879 Nvme0n1 00:12:35.879 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:36.137 [ 00:12:36.137 { 00:12:36.137 "name": "Nvme0n1", 00:12:36.137 "aliases": [ 00:12:36.137 "0da238a2-bcae-4edf-a8f2-47dc8837f140" 00:12:36.137 ], 00:12:36.137 "product_name": "NVMe disk", 00:12:36.137 "block_size": 4096, 00:12:36.137 "num_blocks": 38912, 00:12:36.137 "uuid": "0da238a2-bcae-4edf-a8f2-47dc8837f140", 00:12:36.137 "numa_id": -1, 00:12:36.137 "assigned_rate_limits": { 00:12:36.137 "rw_ios_per_sec": 0, 00:12:36.137 "rw_mbytes_per_sec": 0, 00:12:36.137 "r_mbytes_per_sec": 0, 00:12:36.137 "w_mbytes_per_sec": 0 00:12:36.137 }, 00:12:36.137 "claimed": false, 00:12:36.137 "zoned": false, 00:12:36.137 "supported_io_types": { 00:12:36.137 "read": true, 00:12:36.137 "write": true, 00:12:36.137 "unmap": true, 00:12:36.137 "flush": true, 00:12:36.137 "reset": true, 00:12:36.137 "nvme_admin": true, 00:12:36.137 "nvme_io": true, 00:12:36.137 "nvme_io_md": false, 00:12:36.137 "write_zeroes": true, 00:12:36.137 "zcopy": false, 00:12:36.137 "get_zone_info": false, 00:12:36.137 "zone_management": false, 00:12:36.137 "zone_append": false, 00:12:36.137 "compare": true, 00:12:36.137 "compare_and_write": true, 00:12:36.137 "abort": true, 00:12:36.137 "seek_hole": false, 00:12:36.137 "seek_data": false, 00:12:36.137 "copy": true, 00:12:36.138 "nvme_iov_md": false 00:12:36.138 }, 00:12:36.138 "memory_domains": [ 00:12:36.138 { 00:12:36.138 "dma_device_id": "system", 00:12:36.138 "dma_device_type": 1 00:12:36.138 } 00:12:36.138 ], 00:12:36.138 "driver_specific": { 00:12:36.138 "nvme": [ 00:12:36.138 { 00:12:36.138 "trid": { 00:12:36.138 "trtype": "TCP", 00:12:36.138 "adrfam": "IPv4", 00:12:36.138 "traddr": "10.0.0.3", 00:12:36.138 "trsvcid": "4420", 00:12:36.138 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:36.138 }, 00:12:36.138 "ctrlr_data": { 00:12:36.138 "cntlid": 1, 00:12:36.138 "vendor_id": "0x8086", 00:12:36.138 "model_number": "SPDK bdev Controller", 00:12:36.138 "serial_number": "SPDK0", 00:12:36.138 "firmware_revision": "25.01", 00:12:36.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:36.138 "oacs": { 00:12:36.138 "security": 0, 00:12:36.138 "format": 0, 00:12:36.138 "firmware": 0, 
00:12:36.138 "ns_manage": 0 00:12:36.138 }, 00:12:36.138 "multi_ctrlr": true, 00:12:36.138 "ana_reporting": false 00:12:36.138 }, 00:12:36.138 "vs": { 00:12:36.138 "nvme_version": "1.3" 00:12:36.138 }, 00:12:36.138 "ns_data": { 00:12:36.138 "id": 1, 00:12:36.138 "can_share": true 00:12:36.138 } 00:12:36.138 } 00:12:36.138 ], 00:12:36.138 "mp_policy": "active_passive" 00:12:36.138 } 00:12:36.138 } 00:12:36.138 ] 00:12:36.138 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63509 00:12:36.138 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:36.138 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:36.396 Running I/O for 10 seconds... 00:12:37.334 Latency(us) 00:12:37.334 [2024-10-07T11:23:32.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.334 Nvme0n1 : 1.00 7290.00 28.48 0.00 0.00 0.00 0.00 0.00 00:12:37.334 [2024-10-07T11:23:32.857Z] =================================================================================================================== 00:12:37.334 [2024-10-07T11:23:32.857Z] Total : 7290.00 28.48 0.00 0.00 0.00 0.00 0.00 00:12:37.334 00:12:38.309 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:38.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.309 Nvme0n1 : 2.00 7328.00 28.62 0.00 0.00 0.00 0.00 0.00 00:12:38.309 [2024-10-07T11:23:33.832Z] =================================================================================================================== 00:12:38.309 [2024-10-07T11:23:33.832Z] Total : 7328.00 28.62 0.00 0.00 0.00 0.00 0.00 00:12:38.309 00:12:38.567 true 00:12:38.567 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:38.567 11:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:38.826 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:38.826 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:38.826 11:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63509 00:12:39.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.393 Nvme0n1 : 3.00 7298.33 28.51 0.00 0.00 0.00 0.00 0.00 00:12:39.393 [2024-10-07T11:23:34.916Z] =================================================================================================================== 00:12:39.393 [2024-10-07T11:23:34.916Z] Total : 7298.33 28.51 0.00 0.00 0.00 0.00 0.00 00:12:39.393 00:12:40.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.365 Nvme0n1 : 4.00 7283.50 28.45 0.00 0.00 0.00 0.00 0.00 00:12:40.365 [2024-10-07T11:23:35.888Z] 
=================================================================================================================== 00:12:40.365 [2024-10-07T11:23:35.888Z] Total : 7283.50 28.45 0.00 0.00 0.00 0.00 0.00 00:12:40.365 00:12:41.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.299 Nvme0n1 : 5.00 7223.80 28.22 0.00 0.00 0.00 0.00 0.00 00:12:41.299 [2024-10-07T11:23:36.822Z] =================================================================================================================== 00:12:41.299 [2024-10-07T11:23:36.822Z] Total : 7223.80 28.22 0.00 0.00 0.00 0.00 0.00 00:12:41.299 00:12:42.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.232 Nvme0n1 : 6.00 7093.67 27.71 0.00 0.00 0.00 0.00 0.00 00:12:42.232 [2024-10-07T11:23:37.755Z] =================================================================================================================== 00:12:42.232 [2024-10-07T11:23:37.755Z] Total : 7093.67 27.71 0.00 0.00 0.00 0.00 0.00 00:12:42.232 00:12:43.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.605 Nvme0n1 : 7.00 7060.00 27.58 0.00 0.00 0.00 0.00 0.00 00:12:43.605 [2024-10-07T11:23:39.128Z] =================================================================================================================== 00:12:43.605 [2024-10-07T11:23:39.128Z] Total : 7060.00 27.58 0.00 0.00 0.00 0.00 0.00 00:12:43.605 00:12:44.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.537 Nvme0n1 : 8.00 7050.62 27.54 0.00 0.00 0.00 0.00 0.00 00:12:44.537 [2024-10-07T11:23:40.060Z] =================================================================================================================== 00:12:44.537 [2024-10-07T11:23:40.060Z] Total : 7050.62 27.54 0.00 0.00 0.00 0.00 0.00 00:12:44.537 00:12:45.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.471 Nvme0n1 : 9.00 7043.33 27.51 0.00 0.00 0.00 0.00 0.00 00:12:45.471 [2024-10-07T11:23:40.994Z] =================================================================================================================== 00:12:45.471 [2024-10-07T11:23:40.994Z] Total : 7043.33 27.51 0.00 0.00 0.00 0.00 0.00 00:12:45.471 00:12:46.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.405 Nvme0n1 : 10.00 7024.80 27.44 0.00 0.00 0.00 0.00 0.00 00:12:46.405 [2024-10-07T11:23:41.928Z] =================================================================================================================== 00:12:46.405 [2024-10-07T11:23:41.928Z] Total : 7024.80 27.44 0.00 0.00 0.00 0.00 0.00 00:12:46.405 00:12:46.405 00:12:46.405 Latency(us) 00:12:46.405 [2024-10-07T11:23:41.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.405 Nvme0n1 : 10.02 7023.16 27.43 0.00 0.00 18219.77 5451.40 118203.11 00:12:46.405 [2024-10-07T11:23:41.928Z] =================================================================================================================== 00:12:46.405 [2024-10-07T11:23:41.928Z] Total : 7023.16 27.43 0.00 0.00 18219.77 5451.40 118203.11 00:12:46.405 { 00:12:46.405 "results": [ 00:12:46.405 { 00:12:46.405 "job": "Nvme0n1", 00:12:46.405 "core_mask": "0x2", 00:12:46.405 "workload": "randwrite", 00:12:46.405 "status": "finished", 00:12:46.405 "queue_depth": 128, 00:12:46.405 "io_size": 4096, 00:12:46.405 "runtime": 
10.020562, 00:12:46.405 "iops": 7023.158980504287, 00:12:46.405 "mibps": 27.43421476759487, 00:12:46.405 "io_failed": 0, 00:12:46.405 "io_timeout": 0, 00:12:46.405 "avg_latency_us": 18219.774949724597, 00:12:46.405 "min_latency_us": 5451.403636363636, 00:12:46.405 "max_latency_us": 118203.11272727273 00:12:46.405 } 00:12:46.405 ], 00:12:46.405 "core_count": 1 00:12:46.405 } 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63486 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63486 ']' 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63486 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63486 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:46.405 killing process with pid 63486 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63486' 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63486 00:12:46.405 Received shutdown signal, test time was about 10.000000 seconds 00:12:46.405 00:12:46.405 Latency(us) 00:12:46.405 [2024-10-07T11:23:41.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.405 [2024-10-07T11:23:41.928Z] =================================================================================================================== 00:12:46.405 [2024-10-07T11:23:41.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:46.405 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63486 00:12:46.664 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:46.922 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:47.180 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:47.180 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:47.745 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:47.745 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:47.745 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:47.745 [2024-10-07 11:23:43.267970] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:48.003 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:48.261 request: 00:12:48.261 { 00:12:48.261 "uuid": "fb166428-f1b5-4f57-a68e-0ed1a7db807c", 00:12:48.261 "method": "bdev_lvol_get_lvstores", 00:12:48.261 "req_id": 1 00:12:48.261 } 00:12:48.261 Got JSON-RPC error response 00:12:48.261 response: 00:12:48.261 { 00:12:48.261 "code": -19, 00:12:48.261 "message": "No such device" 00:12:48.261 } 00:12:48.261 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:48.261 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.261 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.261 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.261 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:48.519 aio_bdev 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
0da238a2-bcae-4edf-a8f2-47dc8837f140 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0da238a2-bcae-4edf-a8f2-47dc8837f140 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.519 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:48.778 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0da238a2-bcae-4edf-a8f2-47dc8837f140 -t 2000 00:12:49.036 [ 00:12:49.036 { 00:12:49.036 "name": "0da238a2-bcae-4edf-a8f2-47dc8837f140", 00:12:49.036 "aliases": [ 00:12:49.036 "lvs/lvol" 00:12:49.036 ], 00:12:49.036 "product_name": "Logical Volume", 00:12:49.036 "block_size": 4096, 00:12:49.036 "num_blocks": 38912, 00:12:49.036 "uuid": "0da238a2-bcae-4edf-a8f2-47dc8837f140", 00:12:49.036 "assigned_rate_limits": { 00:12:49.036 "rw_ios_per_sec": 0, 00:12:49.036 "rw_mbytes_per_sec": 0, 00:12:49.036 "r_mbytes_per_sec": 0, 00:12:49.036 "w_mbytes_per_sec": 0 00:12:49.036 }, 00:12:49.036 "claimed": false, 00:12:49.036 "zoned": false, 00:12:49.036 "supported_io_types": { 00:12:49.036 "read": true, 00:12:49.036 "write": true, 00:12:49.036 "unmap": true, 00:12:49.036 "flush": false, 00:12:49.036 "reset": true, 00:12:49.036 "nvme_admin": false, 00:12:49.036 "nvme_io": false, 00:12:49.036 "nvme_io_md": false, 00:12:49.036 "write_zeroes": true, 00:12:49.036 "zcopy": false, 00:12:49.036 "get_zone_info": false, 00:12:49.036 "zone_management": false, 00:12:49.036 "zone_append": false, 00:12:49.036 "compare": false, 00:12:49.036 "compare_and_write": false, 00:12:49.036 "abort": false, 00:12:49.036 "seek_hole": true, 00:12:49.036 "seek_data": true, 00:12:49.036 "copy": false, 00:12:49.036 "nvme_iov_md": false 00:12:49.036 }, 00:12:49.036 "driver_specific": { 00:12:49.036 "lvol": { 00:12:49.036 "lvol_store_uuid": "fb166428-f1b5-4f57-a68e-0ed1a7db807c", 00:12:49.036 "base_bdev": "aio_bdev", 00:12:49.036 "thin_provision": false, 00:12:49.036 "num_allocated_clusters": 38, 00:12:49.036 "snapshot": false, 00:12:49.036 "clone": false, 00:12:49.036 "esnap_clone": false 00:12:49.036 } 00:12:49.036 } 00:12:49.036 } 00:12:49.036 ] 00:12:49.036 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:12:49.036 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:49.036 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:49.294 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:49.294 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:12:49.294 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:49.552 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:49.552 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0da238a2-bcae-4edf-a8f2-47dc8837f140 00:12:49.810 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb166428-f1b5-4f57-a68e-0ed1a7db807c 00:12:50.068 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:50.326 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:50.904 00:12:50.904 real 0m19.103s 00:12:50.904 user 0m18.101s 00:12:50.904 sys 0m2.588s 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:50.904 ************************************ 00:12:50.904 END TEST lvs_grow_clean 00:12:50.904 ************************************ 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.904 ************************************ 00:12:50.904 START TEST lvs_grow_dirty 00:12:50.904 ************************************ 00:12:50.904 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:50.905 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:51.176 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:51.176 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:51.433 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:12:51.433 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:12:51.433 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:51.691 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:51.691 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:51.691 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 lvol 150 00:12:51.949 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8fef5064-96be-4b04-ae5f-313af597fe26 00:12:51.949 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:51.949 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:52.207 [2024-10-07 11:23:47.572209] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:52.207 [2024-10-07 11:23:47.572299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:52.207 true 00:12:52.207 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:12:52.207 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:52.464 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:52.464 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:52.722 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8fef5064-96be-4b04-ae5f-313af597fe26 00:12:52.980 11:23:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:53.238 [2024-10-07 11:23:48.664838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:53.238 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63764 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63764 /var/tmp/bdevperf.sock 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63764 ']' 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.496 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:53.496 [2024-10-07 11:23:49.005174] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
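Both runs drive I/O the same way: bdevperf is started idle on its own RPC socket, the exported namespace is attached as a local NVMe bdev over TCP, and only then is the timed workload kicked off. A minimal sketch of that flow with the parameters used here (4096-byte random writes, queue depth 128, 10 seconds, per-second stats; -z makes bdevperf wait for the RPC trigger instead of starting immediately):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # shows up as bdev Nvme0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests       # runs the workload and prints the per-second table below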
00:12:53.496 [2024-10-07 11:23:49.005283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63764 ] 00:12:53.754 [2024-10-07 11:23:49.146368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.754 [2024-10-07 11:23:49.271690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.012 [2024-10-07 11:23:49.328642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.578 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.578 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:54.578 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:55.144 Nvme0n1 00:12:55.144 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:55.403 [ 00:12:55.403 { 00:12:55.403 "name": "Nvme0n1", 00:12:55.403 "aliases": [ 00:12:55.403 "8fef5064-96be-4b04-ae5f-313af597fe26" 00:12:55.403 ], 00:12:55.403 "product_name": "NVMe disk", 00:12:55.403 "block_size": 4096, 00:12:55.403 "num_blocks": 38912, 00:12:55.403 "uuid": "8fef5064-96be-4b04-ae5f-313af597fe26", 00:12:55.403 "numa_id": -1, 00:12:55.403 "assigned_rate_limits": { 00:12:55.403 "rw_ios_per_sec": 0, 00:12:55.403 "rw_mbytes_per_sec": 0, 00:12:55.403 "r_mbytes_per_sec": 0, 00:12:55.403 "w_mbytes_per_sec": 0 00:12:55.403 }, 00:12:55.403 "claimed": false, 00:12:55.403 "zoned": false, 00:12:55.403 "supported_io_types": { 00:12:55.403 "read": true, 00:12:55.403 "write": true, 00:12:55.403 "unmap": true, 00:12:55.403 "flush": true, 00:12:55.403 "reset": true, 00:12:55.403 "nvme_admin": true, 00:12:55.403 "nvme_io": true, 00:12:55.403 "nvme_io_md": false, 00:12:55.403 "write_zeroes": true, 00:12:55.403 "zcopy": false, 00:12:55.403 "get_zone_info": false, 00:12:55.403 "zone_management": false, 00:12:55.403 "zone_append": false, 00:12:55.403 "compare": true, 00:12:55.403 "compare_and_write": true, 00:12:55.403 "abort": true, 00:12:55.403 "seek_hole": false, 00:12:55.403 "seek_data": false, 00:12:55.403 "copy": true, 00:12:55.403 "nvme_iov_md": false 00:12:55.403 }, 00:12:55.403 "memory_domains": [ 00:12:55.403 { 00:12:55.403 "dma_device_id": "system", 00:12:55.403 "dma_device_type": 1 00:12:55.403 } 00:12:55.403 ], 00:12:55.403 "driver_specific": { 00:12:55.403 "nvme": [ 00:12:55.403 { 00:12:55.403 "trid": { 00:12:55.403 "trtype": "TCP", 00:12:55.403 "adrfam": "IPv4", 00:12:55.403 "traddr": "10.0.0.3", 00:12:55.403 "trsvcid": "4420", 00:12:55.403 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:55.403 }, 00:12:55.403 "ctrlr_data": { 00:12:55.403 "cntlid": 1, 00:12:55.403 "vendor_id": "0x8086", 00:12:55.403 "model_number": "SPDK bdev Controller", 00:12:55.403 "serial_number": "SPDK0", 00:12:55.403 "firmware_revision": "25.01", 00:12:55.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:55.403 "oacs": { 00:12:55.403 "security": 0, 00:12:55.403 "format": 0, 00:12:55.403 "firmware": 0, 
00:12:55.403 "ns_manage": 0 00:12:55.403 }, 00:12:55.403 "multi_ctrlr": true, 00:12:55.403 "ana_reporting": false 00:12:55.403 }, 00:12:55.403 "vs": { 00:12:55.403 "nvme_version": "1.3" 00:12:55.403 }, 00:12:55.403 "ns_data": { 00:12:55.403 "id": 1, 00:12:55.403 "can_share": true 00:12:55.403 } 00:12:55.403 } 00:12:55.403 ], 00:12:55.403 "mp_policy": "active_passive" 00:12:55.403 } 00:12:55.403 } 00:12:55.403 ] 00:12:55.403 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63793 00:12:55.403 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:55.403 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:55.403 Running I/O for 10 seconds... 00:12:56.775 Latency(us) 00:12:56.775 [2024-10-07T11:23:52.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.775 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:56.775 [2024-10-07T11:23:52.298Z] =================================================================================================================== 00:12:56.775 [2024-10-07T11:23:52.298Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:12:56.775 00:12:57.342 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:12:57.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.600 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:12:57.600 [2024-10-07T11:23:53.123Z] =================================================================================================================== 00:12:57.600 [2024-10-07T11:23:53.124Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:12:57.601 00:12:57.859 true 00:12:57.859 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:12:57.859 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:58.118 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:58.118 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:58.118 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63793 00:12:58.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.686 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:12:58.686 [2024-10-07T11:23:54.209Z] =================================================================================================================== 00:12:58.686 [2024-10-07T11:23:54.209Z] Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:12:58.686 00:12:59.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.620 Nvme0n1 : 4.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:12:59.620 [2024-10-07T11:23:55.143Z] 
=================================================================================================================== 00:12:59.620 [2024-10-07T11:23:55.143Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:12:59.620 00:13:00.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.554 Nvme0n1 : 5.00 7186.20 28.07 0.00 0.00 0.00 0.00 0.00 00:13:00.554 [2024-10-07T11:23:56.077Z] =================================================================================================================== 00:13:00.554 [2024-10-07T11:23:56.077Z] Total : 7186.20 28.07 0.00 0.00 0.00 0.00 0.00 00:13:00.554 00:13:01.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.503 Nvme0n1 : 6.00 7173.83 28.02 0.00 0.00 0.00 0.00 0.00 00:13:01.503 [2024-10-07T11:23:57.026Z] =================================================================================================================== 00:13:01.503 [2024-10-07T11:23:57.026Z] Total : 7173.83 28.02 0.00 0.00 0.00 0.00 0.00 00:13:01.503 00:13:02.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.437 Nvme0n1 : 7.00 7165.00 27.99 0.00 0.00 0.00 0.00 0.00 00:13:02.437 [2024-10-07T11:23:57.960Z] =================================================================================================================== 00:13:02.437 [2024-10-07T11:23:57.960Z] Total : 7165.00 27.99 0.00 0.00 0.00 0.00 0.00 00:13:02.437 00:13:03.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.388 Nvme0n1 : 8.00 7142.50 27.90 0.00 0.00 0.00 0.00 0.00 00:13:03.388 [2024-10-07T11:23:58.911Z] =================================================================================================================== 00:13:03.388 [2024-10-07T11:23:58.911Z] Total : 7142.50 27.90 0.00 0.00 0.00 0.00 0.00 00:13:03.388 00:13:04.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.764 Nvme0n1 : 9.00 7125.00 27.83 0.00 0.00 0.00 0.00 0.00 00:13:04.764 [2024-10-07T11:24:00.287Z] =================================================================================================================== 00:13:04.764 [2024-10-07T11:24:00.287Z] Total : 7125.00 27.83 0.00 0.00 0.00 0.00 0.00 00:13:04.764 00:13:05.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.700 Nvme0n1 : 10.00 7111.00 27.78 0.00 0.00 0.00 0.00 0.00 00:13:05.700 [2024-10-07T11:24:01.223Z] =================================================================================================================== 00:13:05.700 [2024-10-07T11:24:01.223Z] Total : 7111.00 27.78 0.00 0.00 0.00 0.00 0.00 00:13:05.700 00:13:05.700 00:13:05.700 Latency(us) 00:13:05.700 [2024-10-07T11:24:01.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.700 Nvme0n1 : 10.01 7117.42 27.80 0.00 0.00 17977.89 10366.60 108193.98 00:13:05.700 [2024-10-07T11:24:01.223Z] =================================================================================================================== 00:13:05.700 [2024-10-07T11:24:01.223Z] Total : 7117.42 27.80 0.00 0.00 17977.89 10366.60 108193.98 00:13:05.700 { 00:13:05.700 "results": [ 00:13:05.700 { 00:13:05.700 "job": "Nvme0n1", 00:13:05.700 "core_mask": "0x2", 00:13:05.700 "workload": "randwrite", 00:13:05.700 "status": "finished", 00:13:05.700 "queue_depth": 128, 00:13:05.700 "io_size": 4096, 00:13:05.700 "runtime": 
10.00897, 00:13:05.700 "iops": 7117.415678136711, 00:13:05.700 "mibps": 27.802404992721527, 00:13:05.700 "io_failed": 0, 00:13:05.700 "io_timeout": 0, 00:13:05.700 "avg_latency_us": 17977.893246556356, 00:13:05.700 "min_latency_us": 10366.603636363636, 00:13:05.700 "max_latency_us": 108193.97818181818 00:13:05.700 } 00:13:05.700 ], 00:13:05.700 "core_count": 1 00:13:05.700 } 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63764 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63764 ']' 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63764 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63764 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:05.700 killing process with pid 63764 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63764' 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63764 00:13:05.700 Received shutdown signal, test time was about 10.000000 seconds 00:13:05.700 00:13:05.700 Latency(us) 00:13:05.700 [2024-10-07T11:24:01.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.700 [2024-10-07T11:24:01.223Z] =================================================================================================================== 00:13:05.700 [2024-10-07T11:24:01.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.700 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63764 00:13:05.700 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:06.267 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:06.525 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:06.525 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63392 
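The [[ dirty == dirty ]] check just above is what separates this run from lvs_grow_clean: with 61 free clusters confirmed, the lvol and lvstore are deliberately not deleted over RPC; the target is killed outright so the lvstore on the AIO file is left without a clean shutdown. The two trace entries amount to roughly (PID taken from this run):

  [[ dirty == dirty ]]          # branch only taken in the lvs_grow_dirty variant
  kill -9 63392                 # SIGKILL nvmf_tgt while the lvstore is still open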
00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63392 00:13:06.783 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63392 Killed "${NVMF_APP[@]}" "$@" 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=63926 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 63926 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63926 ']' 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.783 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.783 [2024-10-07 11:24:02.202998] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:06.783 [2024-10-07 11:24:02.203123] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.041 [2024-10-07 11:24:02.347586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.041 [2024-10-07 11:24:02.464388] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.041 [2024-10-07 11:24:02.464492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.041 [2024-10-07 11:24:02.464520] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.041 [2024-10-07 11:24:02.464528] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.041 [2024-10-07 11:24:02.464535] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
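The replacement target (PID 63926) finishes its startup banner below; the test then re-creates the AIO bdev on the same backing file, which is enough to make the lvol module examine it. Because the lvstore was never cleanly unloaded, the "Performing recovery on blobstore" notices that follow are expected, and the rest of the test verifies that the grow survived the crash. Roughly, with the UUIDs from this run (the jq filters are applied to the get_lvstores output, as in the trace):

  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096                   # triggers lvstore examine + blobstore recovery
  scripts/rpc.py bdev_get_bdevs -b 8fef5064-96be-4b04-ae5f-313af597fe26 -t 2000            # the lvol bdev is back
  scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 | jq -r '.[0].free_clusters'        # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 | jq -r '.[0].total_data_clusters'  # expect 99

The arithmetic matches the bdev dump that follows: 99 total clusters minus the 38 clusters backing the 150 MiB lvol leaves the 61 free clusters the test asserts.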
00:13:07.041 [2024-10-07 11:24:02.464941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.041 [2024-10-07 11:24:02.521517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.001 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:08.301 [2024-10-07 11:24:03.587478] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:08.301 [2024-10-07 11:24:03.587785] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:08.301 [2024-10-07 11:24:03.588028] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8fef5064-96be-4b04-ae5f-313af597fe26 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8fef5064-96be-4b04-ae5f-313af597fe26 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.301 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:08.559 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fef5064-96be-4b04-ae5f-313af597fe26 -t 2000 00:13:08.818 [ 00:13:08.818 { 00:13:08.818 "name": "8fef5064-96be-4b04-ae5f-313af597fe26", 00:13:08.818 "aliases": [ 00:13:08.818 "lvs/lvol" 00:13:08.818 ], 00:13:08.818 "product_name": "Logical Volume", 00:13:08.818 "block_size": 4096, 00:13:08.818 "num_blocks": 38912, 00:13:08.818 "uuid": "8fef5064-96be-4b04-ae5f-313af597fe26", 00:13:08.818 "assigned_rate_limits": { 00:13:08.818 "rw_ios_per_sec": 0, 00:13:08.818 "rw_mbytes_per_sec": 0, 00:13:08.818 "r_mbytes_per_sec": 0, 00:13:08.818 "w_mbytes_per_sec": 0 00:13:08.818 }, 00:13:08.818 
"claimed": false, 00:13:08.818 "zoned": false, 00:13:08.818 "supported_io_types": { 00:13:08.818 "read": true, 00:13:08.818 "write": true, 00:13:08.818 "unmap": true, 00:13:08.818 "flush": false, 00:13:08.818 "reset": true, 00:13:08.818 "nvme_admin": false, 00:13:08.818 "nvme_io": false, 00:13:08.818 "nvme_io_md": false, 00:13:08.818 "write_zeroes": true, 00:13:08.818 "zcopy": false, 00:13:08.818 "get_zone_info": false, 00:13:08.818 "zone_management": false, 00:13:08.818 "zone_append": false, 00:13:08.818 "compare": false, 00:13:08.818 "compare_and_write": false, 00:13:08.818 "abort": false, 00:13:08.818 "seek_hole": true, 00:13:08.818 "seek_data": true, 00:13:08.818 "copy": false, 00:13:08.818 "nvme_iov_md": false 00:13:08.818 }, 00:13:08.818 "driver_specific": { 00:13:08.818 "lvol": { 00:13:08.818 "lvol_store_uuid": "4599bdd2-5e79-48e7-a3fc-55488e3d4cd5", 00:13:08.818 "base_bdev": "aio_bdev", 00:13:08.818 "thin_provision": false, 00:13:08.818 "num_allocated_clusters": 38, 00:13:08.818 "snapshot": false, 00:13:08.818 "clone": false, 00:13:08.818 "esnap_clone": false 00:13:08.818 } 00:13:08.818 } 00:13:08.818 } 00:13:08.818 ] 00:13:08.818 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:08.818 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:08.818 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:09.076 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:09.076 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:09.076 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:09.334 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:09.334 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:09.591 [2024-10-07 11:24:05.077090] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:09.591 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:09.591 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:09.591 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:09.591 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.867 11:24:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:09.867 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:10.125 request: 00:13:10.125 { 00:13:10.125 "uuid": "4599bdd2-5e79-48e7-a3fc-55488e3d4cd5", 00:13:10.125 "method": "bdev_lvol_get_lvstores", 00:13:10.125 "req_id": 1 00:13:10.125 } 00:13:10.125 Got JSON-RPC error response 00:13:10.125 response: 00:13:10.125 { 00:13:10.125 "code": -19, 00:13:10.125 "message": "No such device" 00:13:10.125 } 00:13:10.125 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:10.125 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:10.125 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:10.125 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:10.125 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.384 aio_bdev 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8fef5064-96be-4b04-ae5f-313af597fe26 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8fef5064-96be-4b04-ae5f-313af597fe26 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:10.384 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:10.672 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fef5064-96be-4b04-ae5f-313af597fe26 -t 2000 00:13:10.936 [ 00:13:10.936 { 
00:13:10.936 "name": "8fef5064-96be-4b04-ae5f-313af597fe26", 00:13:10.936 "aliases": [ 00:13:10.936 "lvs/lvol" 00:13:10.936 ], 00:13:10.936 "product_name": "Logical Volume", 00:13:10.936 "block_size": 4096, 00:13:10.936 "num_blocks": 38912, 00:13:10.936 "uuid": "8fef5064-96be-4b04-ae5f-313af597fe26", 00:13:10.936 "assigned_rate_limits": { 00:13:10.936 "rw_ios_per_sec": 0, 00:13:10.936 "rw_mbytes_per_sec": 0, 00:13:10.936 "r_mbytes_per_sec": 0, 00:13:10.936 "w_mbytes_per_sec": 0 00:13:10.936 }, 00:13:10.936 "claimed": false, 00:13:10.936 "zoned": false, 00:13:10.936 "supported_io_types": { 00:13:10.936 "read": true, 00:13:10.936 "write": true, 00:13:10.936 "unmap": true, 00:13:10.936 "flush": false, 00:13:10.936 "reset": true, 00:13:10.936 "nvme_admin": false, 00:13:10.936 "nvme_io": false, 00:13:10.936 "nvme_io_md": false, 00:13:10.936 "write_zeroes": true, 00:13:10.936 "zcopy": false, 00:13:10.936 "get_zone_info": false, 00:13:10.936 "zone_management": false, 00:13:10.936 "zone_append": false, 00:13:10.936 "compare": false, 00:13:10.936 "compare_and_write": false, 00:13:10.936 "abort": false, 00:13:10.936 "seek_hole": true, 00:13:10.936 "seek_data": true, 00:13:10.936 "copy": false, 00:13:10.936 "nvme_iov_md": false 00:13:10.936 }, 00:13:10.936 "driver_specific": { 00:13:10.936 "lvol": { 00:13:10.936 "lvol_store_uuid": "4599bdd2-5e79-48e7-a3fc-55488e3d4cd5", 00:13:10.936 "base_bdev": "aio_bdev", 00:13:10.936 "thin_provision": false, 00:13:10.936 "num_allocated_clusters": 38, 00:13:10.936 "snapshot": false, 00:13:10.936 "clone": false, 00:13:10.936 "esnap_clone": false 00:13:10.936 } 00:13:10.936 } 00:13:10.936 } 00:13:10.936 ] 00:13:10.936 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:10.936 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:10.936 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:11.194 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:11.194 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:11.194 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:11.519 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:11.519 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8fef5064-96be-4b04-ae5f-313af597fe26 00:13:11.777 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4599bdd2-5e79-48e7-a3fc-55488e3d4cd5 00:13:12.036 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:12.295 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:12.862 ************************************ 00:13:12.862 END TEST lvs_grow_dirty 00:13:12.862 ************************************ 00:13:12.862 00:13:12.862 real 0m21.955s 00:13:12.862 user 0m46.537s 00:13:12.862 sys 0m7.878s 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:12.862 nvmf_trace.0 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:12.862 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.120 rmmod nvme_tcp 00:13:13.120 rmmod nvme_fabrics 00:13:13.120 rmmod nvme_keyring 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 63926 ']' 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 63926 00:13:13.120 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63926 ']' 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63926 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:13:13.121 11:24:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63926 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.121 killing process with pid 63926 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63926' 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63926 00:13:13.121 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63926 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:13.379 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:13.637 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:13.637 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:13.637 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.637 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:13:13.637 00:13:13.637 real 0m44.057s 00:13:13.637 user 1m12.093s 00:13:13.637 sys 0m11.268s 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.637 ************************************ 00:13:13.637 END TEST nvmf_lvs_grow 00:13:13.637 ************************************ 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.637 ************************************ 00:13:13.637 START TEST nvmf_bdev_io_wait 00:13:13.637 ************************************ 00:13:13.637 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:13.896 * Looking for test storage... 
00:13:13.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.896 --rc genhtml_branch_coverage=1 00:13:13.896 --rc genhtml_function_coverage=1 00:13:13.896 --rc genhtml_legend=1 00:13:13.896 --rc geninfo_all_blocks=1 00:13:13.896 --rc geninfo_unexecuted_blocks=1 00:13:13.896 00:13:13.896 ' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.896 --rc genhtml_branch_coverage=1 00:13:13.896 --rc genhtml_function_coverage=1 00:13:13.896 --rc genhtml_legend=1 00:13:13.896 --rc geninfo_all_blocks=1 00:13:13.896 --rc geninfo_unexecuted_blocks=1 00:13:13.896 00:13:13.896 ' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.896 --rc genhtml_branch_coverage=1 00:13:13.896 --rc genhtml_function_coverage=1 00:13:13.896 --rc genhtml_legend=1 00:13:13.896 --rc geninfo_all_blocks=1 00:13:13.896 --rc geninfo_unexecuted_blocks=1 00:13:13.896 00:13:13.896 ' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.896 --rc genhtml_branch_coverage=1 00:13:13.896 --rc genhtml_function_coverage=1 00:13:13.896 --rc genhtml_legend=1 00:13:13.896 --rc geninfo_all_blocks=1 00:13:13.896 --rc geninfo_unexecuted_blocks=1 00:13:13.896 00:13:13.896 ' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.896 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
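The MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set here are the arguments handed to bdev_malloc_create later in this run, when the target side is built up. Condensed from the rpc_cmd trace further down (same NQN, serial and listener address as in this run), the target-side setup amounts to roughly:

    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

In the log these calls go through the harness's rpc_cmd wrapper against an nvmf_tgt started with --wait-for-rpc, which is why bdev_set_options is applied before framework_start_init completes subsystem initialization.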
00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.897 
11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:13.897 Cannot find device "nvmf_init_br" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:13.897 Cannot find device "nvmf_init_br2" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:13.897 Cannot find device "nvmf_tgt_br" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.897 Cannot find device "nvmf_tgt_br2" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:13.897 Cannot find device "nvmf_init_br" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:13.897 Cannot find device "nvmf_init_br2" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:13.897 Cannot find device "nvmf_tgt_br" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:13.897 Cannot find device "nvmf_tgt_br2" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:13.897 Cannot find device "nvmf_br" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:13.897 Cannot find device "nvmf_init_if" 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:13:13.897 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:14.156 Cannot find device "nvmf_init_if2" 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:13:14.156 
11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:14.156 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:14.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:14.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:13:14.415 00:13:14.415 --- 10.0.0.3 ping statistics --- 00:13:14.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.415 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:14.415 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:14.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:14.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:13:14.415 00:13:14.415 --- 10.0.0.4 ping statistics --- 00:13:14.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.415 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:14.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:14.416 00:13:14.416 --- 10.0.0.1 ping statistics --- 00:13:14.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.416 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:14.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:14.416 00:13:14.416 --- 10.0.0.2 ping statistics --- 00:13:14.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.416 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64300 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64300 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64300 ']' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.416 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.416 [2024-10-07 11:24:09.776369] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
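The pings above confirm the virtual topology that nvmftestinit / nvmf_veth_init assembled a moment earlier. Stripped of the harness wrappers and the expected "Cannot find device" cleanup noise, the setup of the first initiator/target pair amounts to the following sketch (the second pair, nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # each interface and the bridge are also brought up with "ip link set ... up"
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # root namespace -> target address behind nvmf_br

With 10.0.0.1 through 10.0.0.4 reachable across nvmf_br, nvmf_tgt is started inside the namespace (pid 64300 above) and everything that follows talks to it at 10.0.0.3:4420.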
00:13:14.416 [2024-10-07 11:24:09.776464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.416 [2024-10-07 11:24:09.915865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.674 [2024-10-07 11:24:10.045080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.674 [2024-10-07 11:24:10.045147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.674 [2024-10-07 11:24:10.045162] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.674 [2024-10-07 11:24:10.045173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.674 [2024-10-07 11:24:10.045182] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.674 [2024-10-07 11:24:10.047101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.674 [2024-10-07 11:24:10.047309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.674 [2024-10-07 11:24:10.047207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.674 [2024-10-07 11:24:10.047309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 [2024-10-07 11:24:10.985499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 [2024-10-07 11:24:10.997703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 Malloc0 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 [2024-10-07 11:24:11.061708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64346 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64348 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:15.611 11:24:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:15.611 { 00:13:15.611 "params": { 00:13:15.611 "name": "Nvme$subsystem", 00:13:15.611 "trtype": "$TEST_TRANSPORT", 00:13:15.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.611 "adrfam": "ipv4", 00:13:15.611 "trsvcid": "$NVMF_PORT", 00:13:15.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.611 "hdgst": ${hdgst:-false}, 00:13:15.611 "ddgst": ${ddgst:-false} 00:13:15.611 }, 00:13:15.611 "method": "bdev_nvme_attach_controller" 00:13:15.611 } 00:13:15.611 EOF 00:13:15.611 )") 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64349 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:15.611 { 00:13:15.611 "params": { 00:13:15.611 "name": "Nvme$subsystem", 00:13:15.611 "trtype": "$TEST_TRANSPORT", 00:13:15.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.611 "adrfam": "ipv4", 00:13:15.611 "trsvcid": "$NVMF_PORT", 00:13:15.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.611 "hdgst": ${hdgst:-false}, 00:13:15.611 "ddgst": ${ddgst:-false} 00:13:15.611 }, 00:13:15.611 "method": "bdev_nvme_attach_controller" 00:13:15.611 } 00:13:15.611 EOF 00:13:15.611 )") 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64353 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:15.611 { 00:13:15.611 "params": { 00:13:15.611 "name": "Nvme$subsystem", 00:13:15.611 "trtype": "$TEST_TRANSPORT", 00:13:15.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.611 "adrfam": "ipv4", 00:13:15.611 "trsvcid": "$NVMF_PORT", 00:13:15.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.611 "hdgst": ${hdgst:-false}, 00:13:15.611 "ddgst": ${ddgst:-false} 00:13:15.611 }, 00:13:15.611 "method": "bdev_nvme_attach_controller" 00:13:15.611 } 00:13:15.611 EOF 00:13:15.611 )") 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:15.611 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:15.612 "params": { 00:13:15.612 "name": "Nvme1", 00:13:15.612 "trtype": "tcp", 00:13:15.612 "traddr": "10.0.0.3", 00:13:15.612 "adrfam": "ipv4", 00:13:15.612 "trsvcid": "4420", 00:13:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.612 "hdgst": false, 00:13:15.612 "ddgst": false 00:13:15.612 }, 00:13:15.612 "method": "bdev_nvme_attach_controller" 00:13:15.612 }' 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:15.612 { 00:13:15.612 "params": { 00:13:15.612 "name": "Nvme$subsystem", 00:13:15.612 "trtype": "$TEST_TRANSPORT", 00:13:15.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.612 "adrfam": "ipv4", 00:13:15.612 "trsvcid": "$NVMF_PORT", 00:13:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.612 "hdgst": ${hdgst:-false}, 00:13:15.612 "ddgst": ${ddgst:-false} 00:13:15.612 }, 00:13:15.612 "method": "bdev_nvme_attach_controller" 00:13:15.612 } 00:13:15.612 EOF 00:13:15.612 )") 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:15.612 "params": { 00:13:15.612 "name": "Nvme1", 00:13:15.612 "trtype": "tcp", 00:13:15.612 "traddr": "10.0.0.3", 00:13:15.612 "adrfam": "ipv4", 00:13:15.612 "trsvcid": "4420", 00:13:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.612 "hdgst": false, 00:13:15.612 "ddgst": false 00:13:15.612 }, 00:13:15.612 "method": "bdev_nvme_attach_controller" 00:13:15.612 }' 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:15.612 "params": { 00:13:15.612 "name": "Nvme1", 00:13:15.612 "trtype": "tcp", 00:13:15.612 "traddr": "10.0.0.3", 00:13:15.612 "adrfam": "ipv4", 00:13:15.612 "trsvcid": "4420", 00:13:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.612 "hdgst": false, 00:13:15.612 "ddgst": false 00:13:15.612 }, 00:13:15.612 "method": "bdev_nvme_attach_controller" 00:13:15.612 }' 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:13:15.612 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:15.612 "params": { 00:13:15.612 "name": "Nvme1", 00:13:15.612 "trtype": "tcp", 00:13:15.612 "traddr": "10.0.0.3", 00:13:15.612 "adrfam": "ipv4", 00:13:15.612 "trsvcid": "4420", 00:13:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.612 "hdgst": false, 00:13:15.612 "ddgst": false 00:13:15.612 }, 00:13:15.612 "method": "bdev_nvme_attach_controller" 00:13:15.612 }' 00:13:15.612 [2024-10-07 11:24:11.131430] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:15.612 [2024-10-07 11:24:11.131529] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:15.873 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64346 00:13:15.873 [2024-10-07 11:24:11.149429] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:15.873 [2024-10-07 11:24:11.149509] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:15.873 [2024-10-07 11:24:11.149871] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:13:15.873 [2024-10-07 11:24:11.149935] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:15.873 [2024-10-07 11:24:11.150057] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:15.873 [2024-10-07 11:24:11.150132] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:15.873 [2024-10-07 11:24:11.343843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.138 [2024-10-07 11:24:11.412036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.138 [2024-10-07 11:24:11.444076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.138 [2024-10-07 11:24:11.494781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.138 [2024-10-07 11:24:11.495357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.138 [2024-10-07 11:24:11.517888] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:13:16.138 [2024-10-07 11:24:11.568069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.138 [2024-10-07 11:24:11.576185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.138 [2024-10-07 11:24:11.596269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.138 Running I/O for 1 seconds... 00:13:16.138 [2024-10-07 11:24:11.645537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.396 [2024-10-07 11:24:11.692886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.396 Running I/O for 1 seconds... 00:13:16.396 [2024-10-07 11:24:11.745730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.396 Running I/O for 1 seconds... 00:13:16.396 Running I/O for 1 seconds... 
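At this point bdev_io_wait.sh has four bdevperf instances running in parallel against the same cnode1 namespace, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), and it then waits on their PIDs (64346, 64348, 64349, 64353) before tearing down. A condensed sketch of that launch/wait pattern, using the binary path and flags from the trace; the loop is a simplification, the real script starts each instance on its own line:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
pids=()
i=0
for wl in write read flush unmap; do
    i=$((i + 1))
    mask=$(printf '0x%x' $((0x10 << (i - 1))))      # 0x10, 0x20, 0x40, 0x80
    # each instance gets its own core mask and instance id, and reads the
    # same generated target JSON over process substitution
    "$BDEVPERF" -m "$mask" -i "$i" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"                                    # the script waits on each PID in turn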
00:13:17.331 6479.00 IOPS, 25.31 MiB/s 00:13:17.331 Latency(us) 00:13:17.331 [2024-10-07T11:24:12.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.331 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:17.331 Nvme1n1 : 1.03 6450.68 25.20 0.00 0.00 19546.43 5272.67 48615.80 00:13:17.331 [2024-10-07T11:24:12.854Z] =================================================================================================================== 00:13:17.331 [2024-10-07T11:24:12.854Z] Total : 6450.68 25.20 0.00 0.00 19546.43 5272.67 48615.80 00:13:17.331 7896.00 IOPS, 30.84 MiB/s 00:13:17.331 Latency(us) 00:13:17.331 [2024-10-07T11:24:12.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.332 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:17.332 Nvme1n1 : 1.02 7966.99 31.12 0.00 0.00 15984.74 8579.26 27763.43 00:13:17.332 [2024-10-07T11:24:12.855Z] =================================================================================================================== 00:13:17.332 [2024-10-07T11:24:12.855Z] Total : 7966.99 31.12 0.00 0.00 15984.74 8579.26 27763.43 00:13:17.332 167848.00 IOPS, 655.66 MiB/s 00:13:17.332 Latency(us) 00:13:17.332 [2024-10-07T11:24:12.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.332 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:17.332 Nvme1n1 : 1.00 167507.42 654.33 0.00 0.00 760.22 392.84 2025.66 00:13:17.332 [2024-10-07T11:24:12.855Z] =================================================================================================================== 00:13:17.332 [2024-10-07T11:24:12.855Z] Total : 167507.42 654.33 0.00 0.00 760.22 392.84 2025.66 00:13:17.591 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64348 00:13:17.591 5905.00 IOPS, 23.07 MiB/s 00:13:17.591 Latency(us) 00:13:17.591 [2024-10-07T11:24:13.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.591 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:17.591 Nvme1n1 : 1.01 5981.66 23.37 0.00 0.00 21298.49 7923.90 44087.85 00:13:17.591 [2024-10-07T11:24:13.114Z] =================================================================================================================== 00:13:17.591 [2024-10-07T11:24:13.114Z] Total : 5981.66 23.37 0.00 0.00 21298.49 7923.90 44087.85 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64349 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64353 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:13:17.591 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:17.849 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.850 rmmod nvme_tcp 00:13:17.850 rmmod nvme_fabrics 00:13:17.850 rmmod nvme_keyring 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64300 ']' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64300 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64300 ']' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64300 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64300 00:13:17.850 killing process with pid 64300 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64300' 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64300 00:13:17.850 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64300 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:18.109 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:13:18.419 00:13:18.419 real 0m4.623s 00:13:18.419 user 0m18.859s 00:13:18.419 sys 0m2.454s 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.419 ************************************ 00:13:18.419 END TEST nvmf_bdev_io_wait 00:13:18.419 ************************************ 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:18.419 ************************************ 00:13:18.419 START TEST nvmf_queue_depth 00:13:18.419 ************************************ 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:18.419 * Looking for test storage... 
00:13:18.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:18.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.419 --rc genhtml_branch_coverage=1 00:13:18.419 --rc genhtml_function_coverage=1 00:13:18.419 --rc genhtml_legend=1 00:13:18.419 --rc geninfo_all_blocks=1 00:13:18.419 --rc geninfo_unexecuted_blocks=1 00:13:18.419 00:13:18.419 ' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:18.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.419 --rc genhtml_branch_coverage=1 00:13:18.419 --rc genhtml_function_coverage=1 00:13:18.419 --rc genhtml_legend=1 00:13:18.419 --rc geninfo_all_blocks=1 00:13:18.419 --rc geninfo_unexecuted_blocks=1 00:13:18.419 00:13:18.419 ' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:18.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.419 --rc genhtml_branch_coverage=1 00:13:18.419 --rc genhtml_function_coverage=1 00:13:18.419 --rc genhtml_legend=1 00:13:18.419 --rc geninfo_all_blocks=1 00:13:18.419 --rc geninfo_unexecuted_blocks=1 00:13:18.419 00:13:18.419 ' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:18.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.419 --rc genhtml_branch_coverage=1 00:13:18.419 --rc genhtml_function_coverage=1 00:13:18.419 --rc genhtml_legend=1 00:13:18.419 --rc geninfo_all_blocks=1 00:13:18.419 --rc geninfo_unexecuted_blocks=1 00:13:18.419 00:13:18.419 ' 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.419 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.683 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:18.683 
11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:18.683 11:24:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:18.683 Cannot find device "nvmf_init_br" 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:18.683 Cannot find device "nvmf_init_br2" 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:18.683 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:18.683 Cannot find device "nvmf_tgt_br" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.683 Cannot find device "nvmf_tgt_br2" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:18.683 Cannot find device "nvmf_init_br" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:18.683 Cannot find device "nvmf_init_br2" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:18.683 Cannot find device "nvmf_tgt_br" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:18.683 Cannot find device "nvmf_tgt_br2" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:18.683 Cannot find device "nvmf_br" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:18.683 Cannot find device "nvmf_init_if" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:18.683 Cannot find device "nvmf_init_if2" 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.683 11:24:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:13:18.683 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:18.684 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:18.946 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:18.946 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:18.946 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.947 
11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:18.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:13:18.947 00:13:18.947 --- 10.0.0.3 ping statistics --- 00:13:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.947 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:18.947 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:18.947 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:18.947 00:13:18.947 --- 10.0.0.4 ping statistics --- 00:13:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.947 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:18.947 00:13:18.947 --- 10.0.0.1 ping statistics --- 00:13:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.947 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:18.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:18.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:18.947 00:13:18.947 --- 10.0.0.2 ping statistics --- 00:13:18.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.947 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:18.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64641 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64641 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64641 ']' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:18.947 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:18.947 [2024-10-07 11:24:14.461342] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
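The four pings above confirm connectivity between the initiator-side and target-side interfaces before nvmf_tgt is launched inside the namespace. Condensed, the topology that nvmf_veth_init built in the preceding lines looks like the sketch below; interface names and addresses are as they appear in the trace, with the `ip link set ... up` and iptables steps abbreviated:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host side, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host side, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, 10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up
    ip link set "$br" master nvmf_br                 # all veth peers joined on one bridge
done
# iptables ACCEPT rules (tagged SPDK_NVMF for later cleanup via
# iptables-save | grep -v SPDK_NVMF | iptables-restore) and the ping checks follow.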
00:13:18.947 [2024-10-07 11:24:14.461659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.217 [2024-10-07 11:24:14.608631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.217 [2024-10-07 11:24:14.736886] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.217 [2024-10-07 11:24:14.736956] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.217 [2024-10-07 11:24:14.736982] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.217 [2024-10-07 11:24:14.736993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.217 [2024-10-07 11:24:14.737002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.217 [2024-10-07 11:24:14.737485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.476 [2024-10-07 11:24:14.798842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.043 [2024-10-07 11:24:15.546710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.043 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.302 Malloc0 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.302 [2024-10-07 11:24:15.611696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:20.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64673 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64673 /var/tmp/bdevperf.sock 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64673 ']' 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.302 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.302 [2024-10-07 11:24:15.671159] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
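The queue_depth target is assembled with the RPCs traced above (queue_depth.sh lines 23 to 27), and the bdevperf initiator then attaches to it over TCP with a 1024-deep queue. A condensed sketch of that sequence; the rpc.py path is an assumption (the script goes through the rpc_cmd wrapper), everything else is taken from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumed helper path
# target side (nvmf_tgt, pid 64641, listening inside nvmf_tgt_ns_spdk)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# initiator side (bdevperf, pid 64673, started with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10)
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests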
00:13:20.302 [2024-10-07 11:24:15.671544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64673 ] 00:13:20.303 [2024-10-07 11:24:15.810314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.561 [2024-10-07 11:24:15.936983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.561 [2024-10-07 11:24:15.994893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:21.530 NVMe0n1 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.530 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:21.530 Running I/O for 10 seconds... 00:13:23.451 6345.00 IOPS, 24.79 MiB/s [2024-10-07T11:24:20.352Z] 7172.00 IOPS, 28.02 MiB/s [2024-10-07T11:24:20.940Z] 7386.00 IOPS, 28.85 MiB/s [2024-10-07T11:24:22.316Z] 7513.50 IOPS, 29.35 MiB/s [2024-10-07T11:24:23.250Z] 7596.20 IOPS, 29.67 MiB/s [2024-10-07T11:24:24.189Z] 7634.67 IOPS, 29.82 MiB/s [2024-10-07T11:24:25.146Z] 7632.71 IOPS, 29.82 MiB/s [2024-10-07T11:24:26.099Z] 7690.50 IOPS, 30.04 MiB/s [2024-10-07T11:24:27.035Z] 7748.33 IOPS, 30.27 MiB/s [2024-10-07T11:24:27.035Z] 7801.20 IOPS, 30.47 MiB/s 00:13:31.512 Latency(us) 00:13:31.512 [2024-10-07T11:24:27.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.512 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:31.512 Verification LBA range: start 0x0 length 0x4000 00:13:31.512 NVMe0n1 : 10.10 7826.26 30.57 0.00 0.00 130231.49 29193.31 97708.22 00:13:31.512 [2024-10-07T11:24:27.035Z] =================================================================================================================== 00:13:31.512 [2024-10-07T11:24:27.035Z] Total : 7826.26 30.57 0.00 0.00 130231.49 29193.31 97708.22 00:13:31.512 { 00:13:31.512 "results": [ 00:13:31.512 { 00:13:31.512 "job": "NVMe0n1", 00:13:31.512 "core_mask": "0x1", 00:13:31.512 "workload": "verify", 00:13:31.512 "status": "finished", 00:13:31.512 "verify_range": { 00:13:31.512 "start": 0, 00:13:31.512 "length": 16384 00:13:31.512 }, 00:13:31.512 "queue_depth": 1024, 00:13:31.512 "io_size": 4096, 00:13:31.512 "runtime": 10.097927, 00:13:31.512 "iops": 7826.259785795639, 00:13:31.512 "mibps": 30.571327288264214, 00:13:31.512 "io_failed": 0, 00:13:31.512 "io_timeout": 0, 00:13:31.512 "avg_latency_us": 130231.48769572505, 00:13:31.512 "min_latency_us": 29193.30909090909, 00:13:31.512 "max_latency_us": 97708.21818181819 00:13:31.512 
} 00:13:31.512 ], 00:13:31.512 "core_count": 1 00:13:31.512 } 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64673 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64673 ']' 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64673 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64673 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64673' 00:13:31.770 killing process with pid 64673 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64673 00:13:31.770 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.770 00:13:31.770 Latency(us) 00:13:31.770 [2024-10-07T11:24:27.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.770 [2024-10-07T11:24:27.293Z] =================================================================================================================== 00:13:31.770 [2024-10-07T11:24:27.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64673 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:31.770 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.029 rmmod nvme_tcp 00:13:32.029 rmmod nvme_fabrics 00:13:32.029 rmmod nvme_keyring 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64641 ']' 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64641 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64641 ']' 00:13:32.029 
11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64641 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64641 00:13:32.029 killing process with pid 64641 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64641' 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64641 00:13:32.029 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64641 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:32.288 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:32.545 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:32.546 11:24:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:13:32.546 00:13:32.546 real 0m14.156s 00:13:32.546 user 0m24.106s 00:13:32.546 sys 0m2.317s 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.546 ************************************ 00:13:32.546 END TEST nvmf_queue_depth 00:13:32.546 ************************************ 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:32.546 ************************************ 00:13:32.546 START TEST nvmf_target_multipath 00:13:32.546 ************************************ 00:13:32.546 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:32.546 * Looking for test storage... 
00:13:32.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.546 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:32.546 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:13:32.546 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.805 --rc genhtml_branch_coverage=1 00:13:32.805 --rc genhtml_function_coverage=1 00:13:32.805 --rc genhtml_legend=1 00:13:32.805 --rc geninfo_all_blocks=1 00:13:32.805 --rc geninfo_unexecuted_blocks=1 00:13:32.805 00:13:32.805 ' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.805 --rc genhtml_branch_coverage=1 00:13:32.805 --rc genhtml_function_coverage=1 00:13:32.805 --rc genhtml_legend=1 00:13:32.805 --rc geninfo_all_blocks=1 00:13:32.805 --rc geninfo_unexecuted_blocks=1 00:13:32.805 00:13:32.805 ' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.805 --rc genhtml_branch_coverage=1 00:13:32.805 --rc genhtml_function_coverage=1 00:13:32.805 --rc genhtml_legend=1 00:13:32.805 --rc geninfo_all_blocks=1 00:13:32.805 --rc geninfo_unexecuted_blocks=1 00:13:32.805 00:13:32.805 ' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.805 --rc genhtml_branch_coverage=1 00:13:32.805 --rc genhtml_function_coverage=1 00:13:32.805 --rc genhtml_legend=1 00:13:32.805 --rc geninfo_all_blocks=1 00:13:32.805 --rc geninfo_unexecuted_blocks=1 00:13:32.805 00:13:32.805 ' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.805 
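The cmp_versions trace a few entries up is deciding whether the installed lcov predates 2.0 so the matching coverage flags get exported. A condensed, illustrative rendering of that comparison (not the exact scripts/common.sh code):

  lt() {                          # true when version $1 sorts before version $2
    local IFS='.-:'
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
  }

  if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi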
11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.805 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.806 11:24:28 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.806 Cannot find device "nvmf_init_br" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.806 Cannot find device "nvmf_init_br2" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.806 Cannot find device "nvmf_tgt_br" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.806 Cannot find device "nvmf_tgt_br2" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.806 Cannot find device "nvmf_init_br" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.806 Cannot find device "nvmf_init_br2" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.806 Cannot find device "nvmf_tgt_br" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.806 Cannot find device "nvmf_tgt_br2" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.806 Cannot find device "nvmf_br" 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:13:32.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:33.063 Cannot find device "nvmf_init_if" 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:33.063 Cannot find device "nvmf_init_if2" 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:33.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:33.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:33.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:33.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:13:33.064 00:13:33.064 --- 10.0.0.3 ping statistics --- 00:13:33.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.064 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:33.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:33.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:33.064 00:13:33.064 --- 10.0.0.4 ping statistics --- 00:13:33.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.064 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:33.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:33.064 00:13:33.064 --- 10.0.0.1 ping statistics --- 00:13:33.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.064 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:33.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:33.064 00:13:33.064 --- 10.0.0.2 ping statistics --- 00:13:33.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.064 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:33.064 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:33.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
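The nvmf_veth_init sequence above amounts to a bridged virtual topology plus firewall openings that can be removed wholesale at teardown. A minimal sketch of the first initiator/target arm, assuming the interface names and 10.0.0.0/24 addressing used in this log (the second pair, nvmf_init_if2/nvmf_tgt_if2, is built the same way):

  ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                      # bridge the two arms together
  ip link set nvmf_tgt_br  master nvmf_br

  # rules carry an SPDK_NVMF comment so the iptr teardown seen earlier can strip exactly these:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  #   cleanup later: iptables-save | grep -v SPDK_NVMF | iptables-restore
  ping -c 1 10.0.0.3                                           # connectivity check, as above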
00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=65051 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 65051 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 65051 ']' 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:33.321 [2024-10-07 11:24:28.645597] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:33.321 [2024-10-07 11:24:28.645873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.321 [2024-10-07 11:24:28.788187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.580 [2024-10-07 11:24:28.902358] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.580 [2024-10-07 11:24:28.902668] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.580 [2024-10-07 11:24:28.902860] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.580 [2024-10-07 11:24:28.903041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.580 [2024-10-07 11:24:28.903276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
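nvmfappstart, traced just above, launches the target inside that namespace and blocks until its RPC socket answers before any configuration is sent. A sketch of the same sequence; the polling loop illustrates what waitforlisten does rather than copying its code:

  # run the target inside the namespace so it owns 10.0.0.3/10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait for /var/tmp/spdk.sock to come up and answer RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is ready for configuration RPCs"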
00:13:33.580 [2024-10-07 11:24:28.904950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.580 [2024-10-07 11:24:28.905052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.580 [2024-10-07 11:24:28.905122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.580 [2024-10-07 11:24:28.905126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.580 [2024-10-07 11:24:28.963537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.580 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.146 [2024-10-07 11:24:29.412919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.146 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:34.404 Malloc0 00:13:34.404 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:34.662 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.230 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:35.488 [2024-10-07 11:24:30.789500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:35.488 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:35.747 [2024-10-07 11:24:31.113824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:35.747 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:36.005 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:13:37.905 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:37.905 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:37.905 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65139 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:38.163 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:38.163 [global] 00:13:38.163 thread=1 00:13:38.163 invalidate=1 00:13:38.163 rw=randrw 00:13:38.163 time_based=1 00:13:38.163 runtime=6 00:13:38.163 ioengine=libaio 00:13:38.163 direct=1 00:13:38.163 bs=4096 00:13:38.163 iodepth=128 00:13:38.163 norandommap=0 00:13:38.163 numjobs=1 00:13:38.163 00:13:38.163 verify_dump=1 00:13:38.163 verify_backlog=512 00:13:38.163 verify_state_save=0 00:13:38.163 do_verify=1 00:13:38.163 verify=crc32c-intel 00:13:38.163 [job0] 00:13:38.163 filename=/dev/nvme0n1 00:13:38.163 Could not set queue depth (nvme0n1) 00:13:38.163 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.163 fio-3.35 00:13:38.163 Starting 1 thread 00:13:39.098 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:39.356 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
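check_ana_state, whose trace surrounds the fio start above, simply polls the ANA state the kernel reports for each controller path until it matches what the test expects. A condensed, illustrative rendering (the real helper in multipath.sh handles the timeout slightly differently), followed by the listener flips that drive the failover exercised next:

  check_ana_state() {                       # e.g. check_ana_state nvme0c0n1 optimized
    local path=$1 ana_state=$2 timeout=20
    local f=/sys/block/$path/ana_state
    while [[ ! -e $f || $(<"$f") != "$ana_state" ]]; do
      (( timeout-- == 0 )) && return 1      # give up after ~20 seconds
      sleep 1
    done
  }

  # fail the first path and degrade the second, as in the trace that follows
  ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized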
00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:39.615 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:39.874 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:40.440 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65139 00:13:44.713 00:13:44.713 job0: (groupid=0, jobs=1): err= 0: pid=65165: Mon Oct 7 11:24:39 2024 00:13:44.713 read: IOPS=10.5k, BW=40.8MiB/s (42.8MB/s)(245MiB/6006msec) 00:13:44.713 slat (usec): min=3, max=6424, avg=55.27, stdev=217.36 00:13:44.713 clat (usec): min=1176, max=14437, avg=8322.93, stdev=1421.71 00:13:44.713 lat (usec): min=1197, max=14446, avg=8378.20, stdev=1426.67 00:13:44.713 clat percentiles (usec): 00:13:44.713 | 1.00th=[ 4424], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7635], 00:13:44.713 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:13:44.713 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11863], 00:13:44.713 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13698], 00:13:44.713 | 99.99th=[14091] 00:13:44.713 bw ( KiB/s): min= 6424, max=26592, per=52.31%, avg=21879.27, stdev=6155.31, samples=11 00:13:44.713 iops : min= 1606, max= 6648, avg=5469.82, stdev=1538.83, samples=11 00:13:44.713 write: IOPS=6158, BW=24.1MiB/s (25.2MB/s)(129MiB/5378msec); 0 zone resets 00:13:44.713 slat (usec): min=12, max=1705, avg=66.05, stdev=156.47 00:13:44.713 clat (usec): min=388, max=14125, avg=7223.30, stdev=1254.09 00:13:44.713 lat (usec): min=615, max=14155, avg=7289.35, stdev=1258.40 00:13:44.713 clat percentiles (usec): 00:13:44.713 | 1.00th=[ 3359], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 6783], 00:13:44.713 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7570], 00:13:44.713 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:13:44.713 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12780], 99.95th=[13173], 00:13:44.713 | 99.99th=[13829] 00:13:44.713 bw ( KiB/s): min= 6896, max=26496, per=88.94%, avg=21912.00, stdev=5931.36, samples=11 00:13:44.713 iops : min= 1724, max= 6624, avg=5478.00, stdev=1482.84, samples=11 00:13:44.713 lat (usec) : 500=0.01%, 1000=0.01% 00:13:44.713 lat (msec) : 2=0.01%, 4=1.44%, 10=92.95%, 20=5.59% 00:13:44.713 cpu : usr=5.73%, sys=22.38%, ctx=5682, majf=0, minf=90 00:13:44.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:44.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.713 issued rwts: total=62796,33121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.713 00:13:44.713 Run status group 0 (all jobs): 00:13:44.713 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=245MiB (257MB), run=6006-6006msec 00:13:44.713 WRITE: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=129MiB (136MB), run=5378-5378msec 00:13:44.713 00:13:44.713 Disk stats (read/write): 00:13:44.713 nvme0n1: ios=62080/32240, merge=0/0, ticks=495453/217974, in_queue=713427, util=98.66% 00:13:44.713 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:44.713 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65241 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:44.971 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:44.971 [global] 00:13:44.971 thread=1 00:13:44.971 invalidate=1 00:13:44.971 rw=randrw 00:13:44.971 time_based=1 00:13:44.971 runtime=6 00:13:44.971 ioengine=libaio 00:13:44.971 direct=1 00:13:44.971 bs=4096 00:13:44.971 iodepth=128 00:13:44.971 norandommap=0 00:13:44.971 numjobs=1 00:13:44.971 00:13:44.971 verify_dump=1 00:13:44.971 verify_backlog=512 00:13:44.971 verify_state_save=0 00:13:44.971 do_verify=1 00:13:44.971 verify=crc32c-intel 00:13:44.971 [job0] 00:13:44.971 filename=/dev/nvme0n1 00:13:44.971 Could not set queue depth (nvme0n1) 00:13:45.230 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.230 fio-3.35 00:13:45.230 Starting 1 thread 00:13:46.164 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:46.422 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:46.680 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:46.939 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:47.198 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65241 00:13:51.410 00:13:51.410 job0: (groupid=0, jobs=1): err= 0: pid=65268: Mon Oct 7 11:24:46 2024 00:13:51.410 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(272MiB/6005msec) 00:13:51.410 slat (usec): min=2, max=7073, avg=42.01, stdev=192.02 00:13:51.410 clat (usec): min=1143, max=17130, avg=7529.90, stdev=1893.51 00:13:51.410 lat (usec): min=1161, max=17375, avg=7571.91, stdev=1908.75 00:13:51.410 clat percentiles (usec): 00:13:51.410 | 1.00th=[ 2868], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5866], 00:13:51.410 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 00:13:51.410 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11207], 00:13:51.410 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14091], 99.95th=[14484], 00:13:51.410 | 99.99th=[16909] 00:13:51.410 bw ( KiB/s): min=12552, max=39193, per=53.46%, avg=24839.00, stdev=7979.18, samples=11 00:13:51.410 iops : min= 3138, max= 9798, avg=6209.64, stdev=1994.76, samples=11 00:13:51.410 write: IOPS=6862, BW=26.8MiB/s (28.1MB/s)(145MiB/5408msec); 0 zone resets 00:13:51.410 slat (usec): min=4, max=8757, avg=52.95, stdev=143.21 00:13:51.410 clat (usec): min=819, max=16921, avg=6398.23, stdev=1784.45 00:13:51.410 lat (usec): min=861, max=16943, avg=6451.18, stdev=1800.08 00:13:51.410 clat percentiles (usec): 00:13:51.410 | 1.00th=[ 2671], 5.00th=[ 3425], 10.00th=[ 3851], 20.00th=[ 4490], 00:13:51.410 | 30.00th=[ 5211], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7308], 00:13:51.410 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 00:13:51.410 | 99.00th=[11076], 99.50th=[11994], 99.90th=[15270], 99.95th=[16319], 00:13:51.410 | 99.99th=[16712] 00:13:51.410 bw ( KiB/s): min=13120, max=38315, per=90.47%, avg=24832.09, stdev=7789.16, samples=11 00:13:51.410 iops : min= 3280, max= 9578, avg=6207.91, stdev=1947.16, samples=11 00:13:51.410 lat (usec) : 1000=0.01% 00:13:51.410 lat (msec) : 2=0.23%, 4=6.40%, 10=89.02%, 20=4.35% 00:13:51.410 cpu : usr=6.20%, sys=22.65%, ctx=6240, majf=0, minf=90 00:13:51.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:13:51.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.410 issued rwts: total=69756,37111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.410 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:13:51.410 00:13:51.410 Run status group 0 (all jobs): 00:13:51.410 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=272MiB (286MB), run=6005-6005msec 00:13:51.410 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5408-5408msec 00:13:51.410 00:13:51.410 Disk stats (read/write): 00:13:51.410 nvme0n1: ios=68840/36518, merge=0/0, ticks=494795/217235, in_queue=712030, util=98.63% 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.410 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:13:51.411 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.669 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:51.669 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.928 rmmod nvme_tcp 00:13:51.928 rmmod nvme_fabrics 00:13:51.928 rmmod nvme_keyring 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n 65051 ']' 00:13:51.928 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 65051 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 65051 ']' 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 65051 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65051 00:13:51.928 killing process with pid 65051 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65051' 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 65051 00:13:51.928 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 65051 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:52.188 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:52.188 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:13:52.447 ************************************ 00:13:52.447 END TEST nvmf_target_multipath 00:13:52.447 ************************************ 00:13:52.447 00:13:52.447 real 0m19.801s 00:13:52.447 user 1m14.208s 00:13:52.447 sys 0m9.518s 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:52.447 ************************************ 00:13:52.447 START TEST nvmf_zcopy 00:13:52.447 ************************************ 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.447 * Looking for test storage... 
00:13:52.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:13:52.447 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:52.706 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.706 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:52.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.707 --rc genhtml_branch_coverage=1 00:13:52.707 --rc genhtml_function_coverage=1 00:13:52.707 --rc genhtml_legend=1 00:13:52.707 --rc geninfo_all_blocks=1 00:13:52.707 --rc geninfo_unexecuted_blocks=1 00:13:52.707 00:13:52.707 ' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:52.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.707 --rc genhtml_branch_coverage=1 00:13:52.707 --rc genhtml_function_coverage=1 00:13:52.707 --rc genhtml_legend=1 00:13:52.707 --rc geninfo_all_blocks=1 00:13:52.707 --rc geninfo_unexecuted_blocks=1 00:13:52.707 00:13:52.707 ' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:52.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.707 --rc genhtml_branch_coverage=1 00:13:52.707 --rc genhtml_function_coverage=1 00:13:52.707 --rc genhtml_legend=1 00:13:52.707 --rc geninfo_all_blocks=1 00:13:52.707 --rc geninfo_unexecuted_blocks=1 00:13:52.707 00:13:52.707 ' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:52.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.707 --rc genhtml_branch_coverage=1 00:13:52.707 --rc genhtml_function_coverage=1 00:13:52.707 --rc genhtml_legend=1 00:13:52.707 --rc geninfo_all_blocks=1 00:13:52.707 --rc geninfo_unexecuted_blocks=1 00:13:52.707 00:13:52.707 ' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
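(The lt/cmp_versions trace above is scripts/common.sh deciding which lcov option set to use by comparing the installed lcov version against 2. A simplified, self-contained sketch of that comparison follows; it is assumed equivalent to the traced logic, not the exact SPDK helper, and the function name ver_lt is introduced here only for illustration.)

  # Simplified sketch (assumption: equivalent to the traced cmp_versions/lt
  # logic, not copied from scripts/common.sh). Split both version strings on
  # '.', '-' or ':' and compare field by field numerically; return 0 if $1 < $2.
  ver_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      local v a b
      for (( v = 0; v < len; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0     # non-numeric fields compare as 0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal -> not strictly less
  }
  ver_lt 1.15 2 && echo "lcov < 2: use the lcov 1.x option set"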
00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
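(The common.sh prologue traced here pins the test addressing, ports 4420/4421/4422 and the 10.0.0.x prefix, and generates a per-run host identity with nvme gen-hostnqn. The short sketch below shows one way those values feed a kernel-initiator connect, mirroring the multipath run earlier in this log; the exact extraction of the hostid and the connect line are illustrative assumptions, not lifted from common.sh.)

  # Sketch: how the identity variables initialised above are typically consumed.
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # bare uuid portion (one possible extraction)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Connecting the kernel initiator to the subsystem exported on 10.0.0.3:4420
  # (address, port and subsystem NQN are the ones this log configures):
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"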
00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:52.707 Cannot find device "nvmf_init_br" 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:52.707 11:24:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:52.707 Cannot find device "nvmf_init_br2" 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:52.707 Cannot find device "nvmf_tgt_br" 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.707 Cannot find device "nvmf_tgt_br2" 00:13:52.707 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:52.708 Cannot find device "nvmf_init_br" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:52.708 Cannot find device "nvmf_init_br2" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:52.708 Cannot find device "nvmf_tgt_br" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:52.708 Cannot find device "nvmf_tgt_br2" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:52.708 Cannot find device "nvmf_br" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:52.708 Cannot find device "nvmf_init_if" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:52.708 Cannot find device "nvmf_init_if2" 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:13:52.708 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:52.968 11:24:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:52.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:52.968 00:13:52.968 --- 10.0.0.3 ping statistics --- 00:13:52.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.968 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:52.968 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:52.968 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:13:52.968 00:13:52.968 --- 10.0.0.4 ping statistics --- 00:13:52.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.968 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:52.968 00:13:52.968 --- 10.0.0.1 ping statistics --- 00:13:52.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.968 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:52.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:52.968 00:13:52.968 --- 10.0.0.2 ping statistics --- 00:13:52.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.968 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:52.968 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65564 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65564 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65564 ']' 00:13:53.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.226 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:53.226 [2024-10-07 11:24:48.561108] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:13:53.226 [2024-10-07 11:24:48.561418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.226 [2024-10-07 11:24:48.701288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.485 [2024-10-07 11:24:48.826665] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.485 [2024-10-07 11:24:48.826726] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.485 [2024-10-07 11:24:48.826741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.485 [2024-10-07 11:24:48.826752] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.485 [2024-10-07 11:24:48.826761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.485 [2024-10-07 11:24:48.827223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.485 [2024-10-07 11:24:48.883265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.420 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.420 [2024-10-07 11:24:49.644878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.421 [2024-10-07 11:24:49.660952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.421 malloc0 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:54.421 { 00:13:54.421 "params": { 00:13:54.421 "name": "Nvme$subsystem", 00:13:54.421 "trtype": "$TEST_TRANSPORT", 00:13:54.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.421 "adrfam": "ipv4", 00:13:54.421 "trsvcid": "$NVMF_PORT", 00:13:54.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.421 "hdgst": ${hdgst:-false}, 00:13:54.421 "ddgst": ${ddgst:-false} 00:13:54.421 }, 00:13:54.421 "method": "bdev_nvme_attach_controller" 00:13:54.421 } 00:13:54.421 EOF 00:13:54.421 )") 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
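(For reference, the target bring-up the zcopy test just performed maps to the rpc.py sequence below. The flags are copied from the trace above; the trace itself goes through the rpc_cmd wrapper talking to the default /var/tmp/spdk.sock, and the $RPC shorthand is introduced here only for brevity.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1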
00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:13:54.421 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:54.421 "params": { 00:13:54.421 "name": "Nvme1", 00:13:54.421 "trtype": "tcp", 00:13:54.421 "traddr": "10.0.0.3", 00:13:54.421 "adrfam": "ipv4", 00:13:54.421 "trsvcid": "4420", 00:13:54.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.421 "hdgst": false, 00:13:54.421 "ddgst": false 00:13:54.421 }, 00:13:54.421 "method": "bdev_nvme_attach_controller" 00:13:54.421 }' 00:13:54.421 [2024-10-07 11:24:49.765268] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:13:54.421 [2024-10-07 11:24:49.765414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65603 ] 00:13:54.421 [2024-10-07 11:24:49.906703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.679 [2024-10-07 11:24:50.031065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.679 [2024-10-07 11:24:50.096620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.937 Running I/O for 10 seconds... 00:13:56.847 5702.00 IOPS, 44.55 MiB/s [2024-10-07T11:24:53.305Z] 5688.00 IOPS, 44.44 MiB/s [2024-10-07T11:24:54.236Z] 5692.67 IOPS, 44.47 MiB/s [2024-10-07T11:24:55.611Z] 5709.25 IOPS, 44.60 MiB/s [2024-10-07T11:24:56.545Z] 5732.40 IOPS, 44.78 MiB/s [2024-10-07T11:24:57.481Z] 5699.00 IOPS, 44.52 MiB/s [2024-10-07T11:24:58.415Z] 5610.29 IOPS, 43.83 MiB/s [2024-10-07T11:24:59.359Z] 5615.00 IOPS, 43.87 MiB/s [2024-10-07T11:25:00.293Z] 5615.22 IOPS, 43.87 MiB/s [2024-10-07T11:25:00.293Z] 5614.80 IOPS, 43.87 MiB/s 00:14:04.770 Latency(us) 00:14:04.770 [2024-10-07T11:25:00.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.770 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:04.770 Verification LBA range: start 0x0 length 0x1000 00:14:04.770 Nvme1n1 : 10.02 5618.12 43.89 0.00 0.00 22714.18 2889.54 31695.59 00:14:04.770 [2024-10-07T11:25:00.293Z] =================================================================================================================== 00:14:04.770 [2024-10-07T11:25:00.293Z] Total : 5618.12 43.89 0.00 0.00 22714.18 2889.54 31695.59 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65720 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:14:05.029 { 00:14:05.029 "params": { 00:14:05.029 "name": "Nvme$subsystem", 00:14:05.029 "trtype": "$TEST_TRANSPORT", 00:14:05.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.029 "adrfam": "ipv4", 00:14:05.029 "trsvcid": "$NVMF_PORT", 00:14:05.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.029 "hdgst": ${hdgst:-false}, 00:14:05.029 "ddgst": ${ddgst:-false} 00:14:05.029 }, 00:14:05.029 "method": "bdev_nvme_attach_controller" 00:14:05.029 } 00:14:05.029 EOF 00:14:05.029 )") 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:14:05.029 [2024-10-07 11:25:00.475571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.475821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:14:05.029 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:14:05.029 "params": { 00:14:05.029 "name": "Nvme1", 00:14:05.029 "trtype": "tcp", 00:14:05.029 "traddr": "10.0.0.3", 00:14:05.029 "adrfam": "ipv4", 00:14:05.029 "trsvcid": "4420", 00:14:05.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.029 "hdgst": false, 00:14:05.029 "ddgst": false 00:14:05.029 }, 00:14:05.029 "method": "bdev_nvme_attach_controller" 00:14:05.029 }' 00:14:05.029 [2024-10-07 11:25:00.487528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.487569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 [2024-10-07 11:25:00.499537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.499587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 [2024-10-07 11:25:00.511544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.511594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 [2024-10-07 11:25:00.517203] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:14:05.029 [2024-10-07 11:25:00.517292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65720 ] 00:14:05.029 [2024-10-07 11:25:00.523542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.523733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 [2024-10-07 11:25:00.535547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.535776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.029 [2024-10-07 11:25:00.547567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.029 [2024-10-07 11:25:00.547830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.559563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.559806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.571559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.571767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.583563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.583797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.595556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.595752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.607552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.607722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.619558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.619758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.631577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.631797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.643587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.643824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.655191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.319 [2024-10-07 11:25:00.655598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.655733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.667621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.667916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.679628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.679922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.691643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.691717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.703636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.703704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.715618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.715684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.727647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.727729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.739630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.739694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.751653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.751716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.763669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.763744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.775654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.775728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.787670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.787745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.799647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.799709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.811644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.811705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.823673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.823736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.319 [2024-10-07 11:25:00.828097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.319 [2024-10-07 11:25:00.835633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.319 [2024-10-07 11:25:00.835680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 
[2024-10-07 11:25:00.847662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.847726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.859675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.859744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.871674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.871748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.883696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.883777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.895682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.895751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.907685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.907756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.914989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.577 [2024-10-07 11:25:00.919678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.919735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.931742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.931822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.943729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.943814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.955715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.955793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.967699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.967766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.975947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.976002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.983959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.984015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:00.991985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:00.992052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 
11:25:01.004040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.004119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.016075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.016187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.024024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.024087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.031997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.032045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.040025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.040071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 Running I/O for 5 seconds... 00:14:05.577 [2024-10-07 11:25:01.052040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.052084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.071310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.071408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.577 [2024-10-07 11:25:01.087081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.577 [2024-10-07 11:25:01.087409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.104959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.105040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.120265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.120359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.129874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.129933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.146390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.146464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.162064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.162407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.180991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.181069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.195968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:05.834 [2024-10-07 11:25:01.196028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.212403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.212489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.229289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.229379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.245984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.246058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.262095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.262156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.834 [2024-10-07 11:25:01.279006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.834 [2024-10-07 11:25:01.279085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.835 [2024-10-07 11:25:01.295045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.835 [2024-10-07 11:25:01.295155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.835 [2024-10-07 11:25:01.305877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.835 [2024-10-07 11:25:01.305958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.835 [2024-10-07 11:25:01.321105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.835 [2024-10-07 11:25:01.321460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.835 [2024-10-07 11:25:01.336286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.835 [2024-10-07 11:25:01.336372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.835 [2024-10-07 11:25:01.352655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.835 [2024-10-07 11:25:01.352723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.370755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.370826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.384496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.384554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.400800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.400874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.418019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.418104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.434130] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.434212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.451615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.451938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.467274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.467617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.485866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.485951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.501662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.501743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.517863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.517945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.535003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.535422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.552694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.552779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.568981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.569067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.585926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.586282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.093 [2024-10-07 11:25:01.602954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.093 [2024-10-07 11:25:01.603329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.619185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.619627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.629668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.630026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.642796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.643125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.658765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.659144] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.675722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.676083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.691758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.692113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.711061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.711383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.726754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.727043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.744304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.744392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.760513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.760588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.778838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.778919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.793973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.794055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.806892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.806982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.825815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.825914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.843333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.843433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.859629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.859952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.352 [2024-10-07 11:25:01.875420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.352 [2024-10-07 11:25:01.875499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.885935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.886016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.900953] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.901036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.918057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.918147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.934607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.934933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.950658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.950731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.967517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.967604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.982788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.982860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:01.992773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:01.992839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.009417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.009511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.024536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.024855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.040526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.040596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.051142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.051230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 10769.00 IOPS, 84.13 MiB/s [2024-10-07T11:25:02.135Z] [2024-10-07 11:25:02.066179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.066252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.083718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.084041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.099104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.099451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.115784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:06.612 [2024-10-07 11:25:02.115865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.612 [2024-10-07 11:25:02.131510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.612 [2024-10-07 11:25:02.131605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.141570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.141642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.156773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.156861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.168464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.168539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.183981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.184056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.194655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.194990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.209686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.209996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.225954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.226237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.235883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.236119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.252887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.253167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.268629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.268892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.279215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.279476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.294642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.294915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.310438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.310727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.322165] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.322449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.334589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.334817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.349892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.350152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.366199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.366513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.870 [2024-10-07 11:25:02.382158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.870 [2024-10-07 11:25:02.382592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.398752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.399076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.415331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.415431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.433826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.433908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.448818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.448901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.465091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.465171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.482847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.483185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.499634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.499712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.516174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.516258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.533117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.533194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.549076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.549163] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.559410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.559474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.575343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.575414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.590542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.590611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.608915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.608986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.624102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.624196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.128 [2024-10-07 11:25:02.642293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.128 [2024-10-07 11:25:02.642400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.657801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.658113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.668724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.669029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.684557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.684645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.699482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.699795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.715411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.715486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.725506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.725584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.741917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.742000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.756945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.757028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.773417] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.773501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.790274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.790392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.806987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.807074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.823052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.823148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.840310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.840405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.856208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.856294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.865831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.866166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.883028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.883108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.898522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.898601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.387 [2024-10-07 11:25:02.909005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.387 [2024-10-07 11:25:02.909305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.921479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.921550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.937116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.937433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.952489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.952809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.968411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.968494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.978508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.978594] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:02.993658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:02.993744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.009150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.009231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.018896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.018989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.035285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.035380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.051141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.051504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 10817.50 IOPS, 84.51 MiB/s [2024-10-07T11:25:03.169Z] [2024-10-07 11:25:03.061623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.061694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.076614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.076695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.092632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.092709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.103358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.103432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.119482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.119572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.133665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.133734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.149640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.149720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.646 [2024-10-07 11:25:03.166332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.646 [2024-10-07 11:25:03.166412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.183313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.183412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 
11:25:03.200289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.200377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.216832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.216916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.232513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.232601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.243653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.243739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.259011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.259092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.273711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.274040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.289791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.289867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.308418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.308499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.323897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.324197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.334335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.334414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.349710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.350012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.365147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.365481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.375367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.375416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.390805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.390877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.401760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.401820] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.906 [2024-10-07 11:25:03.417433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.906 [2024-10-07 11:25:03.417513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.433565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.433630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.449514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.449597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.459480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.459533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.474514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.474577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.484962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.485239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.501076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.501157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.516722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.516797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.532368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.532440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.551094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.551166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.566782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.164 [2024-10-07 11:25:03.566863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.164 [2024-10-07 11:25:03.585127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.585213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.600643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.600712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.618230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.618356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.634206] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.634287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.652790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.652856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.668434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.668505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.165 [2024-10-07 11:25:03.686830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.165 [2024-10-07 11:25:03.686913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.702516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.702594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.719213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.719295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.735928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.736236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.752459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.752536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.768858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.768937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.787942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.788026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.804153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.804239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.820959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.821037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.836858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.836943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.847047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.847403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.862360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.862443] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.878058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.878442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.894769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.894851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.423 [2024-10-07 11:25:03.912406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.423 [2024-10-07 11:25:03.912490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.424 [2024-10-07 11:25:03.922904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.424 [2024-10-07 11:25:03.922987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.424 [2024-10-07 11:25:03.935719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.424 [2024-10-07 11:25:03.935813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:03.950822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:03.950891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:03.969982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:03.970283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:03.984360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:03.984444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.000418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.000495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.016684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.016761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.033335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.033433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.050218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.050339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 10845.00 IOPS, 84.73 MiB/s [2024-10-07T11:25:04.206Z] [2024-10-07 11:25:04.065293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.065390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.081398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.081471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 
11:25:04.099924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.100221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.115647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.115723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.133622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.133700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.149617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.149691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.166637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.166935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.182933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.182996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.683 [2024-10-07 11:25:04.201336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.683 [2024-10-07 11:25:04.201404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.216345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.216407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.226518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.226569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.243272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.243362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.258909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.258987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.269035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.269091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.284244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.284579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.295978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.296305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.312191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.312536] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.328473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.328556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.346056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.346132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.360926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.361012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.377690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.377777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.392934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.393023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.409626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.942 [2024-10-07 11:25:04.409719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.942 [2024-10-07 11:25:04.420361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.943 [2024-10-07 11:25:04.420445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.943 [2024-10-07 11:25:04.436204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.943 [2024-10-07 11:25:04.436609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.943 [2024-10-07 11:25:04.451218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.943 [2024-10-07 11:25:04.451302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.468130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.468478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.484108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.484441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.499991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.500313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.510284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.510405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.525923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.526227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.541369] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.541685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.551772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.552077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.567875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.568197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.579744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.580050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.597088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.597433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.613956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.614257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.631148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.631475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.641999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.642292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.655086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.655158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.666687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.666750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.681947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.682017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.692345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.692400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.704688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.704754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.200 [2024-10-07 11:25:04.720048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.200 [2024-10-07 11:25:04.720365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.458 [2024-10-07 11:25:04.736696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.458 [2024-10-07 11:25:04.736779] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.458 [2024-10-07 11:25:04.753148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.458 [2024-10-07 11:25:04.753228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.458 [2024-10-07 11:25:04.771534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.458 [2024-10-07 11:25:04.771835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.458 [2024-10-07 11:25:04.783361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.458 [2024-10-07 11:25:04.783421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.799225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.799529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.815665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.815746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.832429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.832500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.848972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.849045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.865783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.865859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.884907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.884989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.900695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.900774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.919309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.919423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.933534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.933619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.949903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.949992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.459 [2024-10-07 11:25:04.966772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.459 [2024-10-07 11:25:04.967108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:04.983970] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:04.984261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.000418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.000726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.017893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.018181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.034544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.034894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.050832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.051146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 10844.00 IOPS, 84.72 MiB/s [2024-10-07T11:25:05.240Z] [2024-10-07 11:25:05.069401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.069721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.085049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.085381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.102058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.102414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.119038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.119355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.135558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.135880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.153043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.153340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.167342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.167421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.183099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.183185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.202266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.202380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.216808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
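The repeated error pair above comes from re-adding a namespace ID that is still attached: spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 because it is already in use, and the RPC layer then reports "Unable to add namespace", while the verify workload keeps printing IOPS samples in between. A minimal sketch of the kind of RPC loop that produces this pattern (hypothetical, not the literal zcopy.sh code; it assumes a running target and the malloc0 bdev that appears later in this log):

  NQN=nqn.2016-06.io.spdk:cnode1
  # Re-adding NSID 1 while it is still attached fails each time with
  # "Requested NSID 1 already in use"; '|| true' keeps the loop running.
  for _ in $(seq 1 10); do
      scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
  done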
00:14:09.717 [2024-10-07 11:25:05.217175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.717 [2024-10-07 11:25:05.233151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.717 [2024-10-07 11:25:05.233502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.250087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.250431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.267095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.267463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.283843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.284179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.299975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.300299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.315709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.316069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.331333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.331654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.347112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.347464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.363729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.364074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.379961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.380260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.396550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.396836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.413431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.413502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.430222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.430358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.447175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.447582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.463851] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.463931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.482752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.975 [2024-10-07 11:25:05.483104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.975 [2024-10-07 11:25:05.498625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.976 [2024-10-07 11:25:05.498706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.518103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.518443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.533981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.534277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.550342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.550428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.569673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.569757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.585068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.585372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.601712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.601787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.620561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.620868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.635715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.636027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.652900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.652983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.668455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.668531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.678461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.678521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.694217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.233 [2024-10-07 11:25:05.694570] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.233 [2024-10-07 11:25:05.710256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.234 [2024-10-07 11:25:05.710601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.234 [2024-10-07 11:25:05.721020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.234 [2024-10-07 11:25:05.721087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.234 [2024-10-07 11:25:05.733262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.234 [2024-10-07 11:25:05.733359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.234 [2024-10-07 11:25:05.748874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.234 [2024-10-07 11:25:05.748954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.763894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.763975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.779233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.779332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.789266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.789362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.805827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.805906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.816479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.816787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.828662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.828735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.844756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.844837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.860178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.860563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.872371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.872480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.887415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.887495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.903251] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.903586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.920661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.920746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.937040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.937121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.954218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.954341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.969196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.969290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:05.985940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:05.986031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:06.001739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:06.001828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.492 [2024-10-07 11:25:06.011850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.492 [2024-10-07 11:25:06.011942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 11:25:06.027961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.028047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 11:25:06.039201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.039533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 10797.60 IOPS, 84.36 MiB/s [2024-10-07T11:25:06.273Z] [2024-10-07 11:25:06.057192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.057523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 00:14:10.750 Latency(us) 00:14:10.750 [2024-10-07T11:25:06.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.750 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:10.750 Nvme1n1 : 5.01 10797.83 84.36 0.00 0.00 11839.36 4915.20 21209.83 00:14:10.750 [2024-10-07T11:25:06.273Z] =================================================================================================================== 00:14:10.750 [2024-10-07T11:25:06.273Z] Total : 10797.83 84.36 0.00 0.00 11839.36 4915.20 21209.83 00:14:10.750 [2024-10-07 11:25:06.068527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.068859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 
11:25:06.080501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.080799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 11:25:06.092514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.092591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 11:25:06.104619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.104740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.750 [2024-10-07 11:25:06.116536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.750 [2024-10-07 11:25:06.116620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.128535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.128612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.140522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.140592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.152530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.152596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.164545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.164629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.176539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.176613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.188548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.188625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.200549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.200621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.212561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.212638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.224557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.224630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.236571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.236655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.248573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.248651] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.260597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.260678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.751 [2024-10-07 11:25:06.272615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.751 [2024-10-07 11:25:06.272700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.010 [2024-10-07 11:25:06.284588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.010 [2024-10-07 11:25:06.284670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.010 [2024-10-07 11:25:06.296578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.010 [2024-10-07 11:25:06.296654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.010 [2024-10-07 11:25:06.311667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.010 [2024-10-07 11:25:06.311756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.010 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65720) - No such process 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65720 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.010 delay0 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.010 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:14:11.010 [2024-10-07 11:25:06.508169] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:17.633 Initializing NVMe Controllers 00:14:17.633 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.633 Associating 
TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.633 Initialization complete. Launching workers. 00:14:17.633 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:14:17.633 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:14:17.633 success 283, unsuccessful 97, failed 0 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.633 rmmod nvme_tcp 00:14:17.633 rmmod nvme_fabrics 00:14:17.633 rmmod nvme_keyring 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65564 ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65564 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65564 ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65564 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65564 00:14:17.633 killing process with pid 65564 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65564' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65564 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65564 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:17.633 
11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:17.633 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.633 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.891 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:14:17.892 ************************************ 00:14:17.892 END TEST nvmf_zcopy 00:14:17.892 ************************************ 00:14:17.892 00:14:17.892 real 0m25.387s 00:14:17.892 user 0m40.849s 00:14:17.892 sys 0m7.174s 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:17.892 ************************************ 
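Before the next test starts, the abort summary printed above can be cross-checked with a little shell arithmetic using only the numbers from this log: successful plus unsuccessful aborts equals the aborts submitted, and submitted plus failed-to-submit lines up with the completed plus failed I/Os.

  # 283 successful + 97 unsuccessful aborts = 380, matching "abort submitted 380"
  echo $((283 + 97))   # 380
  # 380 submitted + 33 failed to submit = 413 abort attempts
  echo $((380 + 33))   # 413
  # 320 I/Os completed + 93 failed = 413, the same total
  echo $((320 + 93))   # 413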
00:14:17.892 START TEST nvmf_nmic 00:14:17.892 ************************************ 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:17.892 * Looking for test storage... 00:14:17.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.892 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.151 --rc genhtml_branch_coverage=1 00:14:18.151 --rc genhtml_function_coverage=1 00:14:18.151 --rc genhtml_legend=1 00:14:18.151 --rc geninfo_all_blocks=1 00:14:18.151 --rc geninfo_unexecuted_blocks=1 00:14:18.151 00:14:18.151 ' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.151 --rc genhtml_branch_coverage=1 00:14:18.151 --rc genhtml_function_coverage=1 00:14:18.151 --rc genhtml_legend=1 00:14:18.151 --rc geninfo_all_blocks=1 00:14:18.151 --rc geninfo_unexecuted_blocks=1 00:14:18.151 00:14:18.151 ' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.151 --rc genhtml_branch_coverage=1 00:14:18.151 --rc genhtml_function_coverage=1 00:14:18.151 --rc genhtml_legend=1 00:14:18.151 --rc geninfo_all_blocks=1 00:14:18.151 --rc geninfo_unexecuted_blocks=1 00:14:18.151 00:14:18.151 ' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.151 --rc genhtml_branch_coverage=1 00:14:18.151 --rc genhtml_function_coverage=1 00:14:18.151 --rc genhtml_legend=1 00:14:18.151 --rc geninfo_all_blocks=1 00:14:18.151 --rc geninfo_unexecuted_blocks=1 00:14:18.151 00:14:18.151 ' 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.151 11:25:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:18.151 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:18.152 11:25:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.152 Cannot 
find device "nvmf_init_br" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.152 Cannot find device "nvmf_init_br2" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.152 Cannot find device "nvmf_tgt_br" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.152 Cannot find device "nvmf_tgt_br2" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.152 Cannot find device "nvmf_init_br" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.152 Cannot find device "nvmf_init_br2" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.152 Cannot find device "nvmf_tgt_br" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.152 Cannot find device "nvmf_tgt_br2" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.152 Cannot find device "nvmf_br" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.152 Cannot find device "nvmf_init_if" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.152 Cannot find device "nvmf_init_if2" 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.152 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:18.411 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:18.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:18.411 00:14:18.411 --- 10.0.0.3 ping statistics --- 00:14:18.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.412 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:18.412 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:18.412 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:18.412 00:14:18.412 --- 10.0.0.4 ping statistics --- 00:14:18.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.412 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:18.412 00:14:18.412 --- 10.0.0.1 ping statistics --- 00:14:18.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.412 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:18.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:18.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:18.412 00:14:18.412 --- 10.0.0.2 ping statistics --- 00:14:18.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.412 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=66105 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 66105 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66105 ']' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.412 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 [2024-10-07 11:25:13.929135] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
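Condensing the veth plumbing and the namespaced target launch traced above into one hedged sketch (interface, namespace and binary names match the trace; only one initiator and one target veth pair are shown, and error handling is omitted):

    # Network namespace for the target, bridged to the host-side initiator interface.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The bridge joins the host-side peers so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port on the initiator interface, allow bridged traffic, check reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    # Start the SPDK target inside the namespace (same flags as nvmfappstart -m 0xF above).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
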
00:14:18.412 [2024-10-07 11:25:13.929211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.670 [2024-10-07 11:25:14.066210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.670 [2024-10-07 11:25:14.186456] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.670 [2024-10-07 11:25:14.186512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.670 [2024-10-07 11:25:14.186526] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.670 [2024-10-07 11:25:14.186536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.670 [2024-10-07 11:25:14.186546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.670 [2024-10-07 11:25:14.187882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.670 [2024-10-07 11:25:14.187979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.670 [2024-10-07 11:25:14.188136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.670 [2024-10-07 11:25:14.188143] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.955 [2024-10-07 11:25:14.245487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.522 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.522 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:14:19.522 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:19.522 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 [2024-10-07 11:25:14.980839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 Malloc0 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.523 11:25:15 
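Once the target is up and listening on /var/tmp/spdk.sock, the provisioning that rpc_cmd drives here reduces to a short JSON-RPC sequence; a condensed sketch using scripts/rpc.py (the namespace and listener calls appear just below in the trace, flags copied verbatim from it):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport, same flags as the rpc_cmd above (-u caps the IO unit size at 8192 bytes;
    # -o is taken verbatim from NVMF_TRANSPORT_OPTS).
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, later exported as a namespace.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # Subsystem cnode1: allow any host (-a), fixed serial for the initiator to grep for.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
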
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 [2024-10-07 11:25:15.031675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 test case1: single bdev can't be used in multiple subsystems 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.523 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.781 [2024-10-07 11:25:15.055536] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:19.781 [2024-10-07 11:25:15.055571] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:19.781 [2024-10-07 11:25:15.055582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:19.781 request: 00:14:19.781 { 00:14:19.781 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:19.781 "namespace": { 00:14:19.781 "bdev_name": "Malloc0", 00:14:19.781 "no_auto_visible": false 00:14:19.781 }, 00:14:19.781 "method": "nvmf_subsystem_add_ns", 00:14:19.781 "req_id": 1 00:14:19.781 } 00:14:19.781 Got JSON-RPC error response 00:14:19.781 response: 00:14:19.781 { 00:14:19.781 "code": -32602, 00:14:19.781 "message": "Invalid parameters" 00:14:19.781 } 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:19.781 Adding namespace failed - expected result. 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:19.781 test case2: host connect to nvmf target in multiple paths 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:19.781 [2024-10-07 11:25:15.067657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:19.781 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:14:20.039 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.039 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.039 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.039 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.039 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.969 11:25:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:21.969 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:21.969 [global] 00:14:21.969 thread=1 00:14:21.969 invalidate=1 00:14:21.969 rw=write 00:14:21.969 time_based=1 00:14:21.969 runtime=1 00:14:21.969 ioengine=libaio 00:14:21.969 direct=1 00:14:21.969 bs=4096 00:14:21.969 iodepth=1 00:14:21.969 norandommap=0 00:14:21.969 numjobs=1 00:14:21.969 00:14:21.969 verify_dump=1 00:14:21.969 verify_backlog=512 00:14:21.969 verify_state_save=0 00:14:21.969 do_verify=1 00:14:21.969 verify=crc32c-intel 00:14:21.969 [job0] 00:14:21.969 filename=/dev/nvme0n1 00:14:21.969 Could not set queue depth (nvme0n1) 00:14:22.226 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:22.226 fio-3.35 00:14:22.226 Starting 1 thread 00:14:23.158 00:14:23.158 job0: (groupid=0, jobs=1): err= 0: pid=66202: Mon Oct 7 11:25:18 2024 00:14:23.158 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:23.158 slat (nsec): min=11774, max=55732, avg=15590.93, stdev=4928.43 00:14:23.158 clat (usec): min=139, max=534, avg=170.04, stdev=17.36 00:14:23.158 lat (usec): min=153, max=547, avg=185.63, stdev=19.29 00:14:23.158 clat percentiles (usec): 00:14:23.158 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:14:23.158 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:14:23.158 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:14:23.158 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 367], 99.95th=[ 379], 00:14:23.158 | 99.99th=[ 537] 00:14:23.158 write: IOPS=3200, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:14:23.158 slat (nsec): min=15250, max=86455, avg=22402.54, stdev=7164.84 00:14:23.158 clat (usec): min=86, max=419, avg=108.18, stdev=16.02 00:14:23.158 lat (usec): min=104, max=438, avg=130.58, stdev=19.61 00:14:23.158 clat percentiles (usec): 00:14:23.158 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:14:23.158 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:14:23.158 | 70.00th=[ 112], 80.00th=[ 118], 90.00th=[ 127], 95.00th=[ 135], 00:14:23.158 | 99.00th=[ 151], 99.50th=[ 180], 99.90th=[ 281], 99.95th=[ 306], 00:14:23.158 | 99.99th=[ 420] 00:14:23.158 bw ( KiB/s): min=12288, max=12288, per=95.98%, avg=12288.00, stdev= 0.00, samples=1 00:14:23.158 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:23.158 lat (usec) : 100=14.37%, 250=85.47%, 500=0.14%, 750=0.02% 00:14:23.158 cpu : usr=3.00%, sys=9.00%, ctx=6276, majf=0, minf=5 00:14:23.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.158 issued rwts: total=3072,3204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.158 00:14:23.158 Run status group 0 (all jobs): 00:14:23.158 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:14:23.158 WRITE: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:14:23.158 00:14:23.158 Disk stats (read/write): 00:14:23.158 nvme0n1: ios=2662/3072, merge=0/0, ticks=482/357, 
in_queue=839, util=91.39% 00:14:23.158 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.462 rmmod nvme_tcp 00:14:23.462 rmmod nvme_fabrics 00:14:23.462 rmmod nvme_keyring 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 66105 ']' 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 66105 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66105 ']' 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66105 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66105 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66105' 00:14:23.462 killing process with pid 66105 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 66105 00:14:23.462 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66105 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:23.720 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:14:23.977 00:14:23.977 real 0m6.085s 00:14:23.977 user 0m18.609s 00:14:23.977 sys 0m2.308s 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.977 ************************************ 00:14:23.977 END TEST nvmf_nmic 00:14:23.977 ************************************ 00:14:23.977 11:25:19 
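The teardown that closes nvmf_nmic is the mirror image of the setup; roughly (device, namespace and variable names as in the trace, failures ignored):

    # Drop the host connections and the kernel NVMe-oF modules loaded for the test.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -r nvme-tcp nvme-fabrics
    # Stop the target, then strip the SPDK-tagged firewall rules and the virtual links.
    kill "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Deleting the namespace also removes the veth ends that were moved into it.
    ip netns delete nvmf_tgt_ns_spdk
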
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.977 ************************************ 00:14:23.977 START TEST nvmf_fio_target 00:14:23.977 ************************************ 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:23.977 * Looking for test storage... 00:14:23.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:23.977 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:24.235 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:24.235 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.235 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:24.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.236 --rc genhtml_branch_coverage=1 00:14:24.236 --rc genhtml_function_coverage=1 00:14:24.236 --rc genhtml_legend=1 00:14:24.236 --rc geninfo_all_blocks=1 00:14:24.236 --rc geninfo_unexecuted_blocks=1 00:14:24.236 00:14:24.236 ' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:24.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.236 --rc genhtml_branch_coverage=1 00:14:24.236 --rc genhtml_function_coverage=1 00:14:24.236 --rc genhtml_legend=1 00:14:24.236 --rc geninfo_all_blocks=1 00:14:24.236 --rc geninfo_unexecuted_blocks=1 00:14:24.236 00:14:24.236 ' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:24.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.236 --rc genhtml_branch_coverage=1 00:14:24.236 --rc genhtml_function_coverage=1 00:14:24.236 --rc genhtml_legend=1 00:14:24.236 --rc geninfo_all_blocks=1 00:14:24.236 --rc geninfo_unexecuted_blocks=1 00:14:24.236 00:14:24.236 ' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:24.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.236 --rc genhtml_branch_coverage=1 00:14:24.236 --rc genhtml_function_coverage=1 00:14:24.236 --rc genhtml_legend=1 00:14:24.236 --rc geninfo_all_blocks=1 00:14:24.236 --rc geninfo_unexecuted_blocks=1 00:14:24.236 00:14:24.236 ' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:24.236 
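The lcov version probe above is just a field-by-field numeric compare (cmp_versions splits on '.', '-' and ':'); a self-contained sketch of the same idea, simplified to dot-separated numeric versions:

    # Succeeds when $1 is strictly older than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"

Here the detected lcov is 1.15, which is older than 2, so the harness selects the pre-2.0 "--rc lcov_branch_coverage=1" option set seen in the trace.
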
11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.236 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.236 11:25:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:24.236 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:24.237 Cannot find device "nvmf_init_br" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:24.237 Cannot find device "nvmf_init_br2" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:24.237 Cannot find device "nvmf_tgt_br" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.237 Cannot find device "nvmf_tgt_br2" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:24.237 Cannot find device "nvmf_init_br" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:24.237 Cannot find device "nvmf_init_br2" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:24.237 Cannot find device "nvmf_tgt_br" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:24.237 Cannot find device "nvmf_tgt_br2" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:24.237 Cannot find device "nvmf_br" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:24.237 Cannot find device "nvmf_init_if" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:24.237 Cannot find device "nvmf_init_if2" 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:14:24.237 
11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:14:24.237 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:24.495 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:24.496 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:24.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:24.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:24.496 00:14:24.496 --- 10.0.0.3 ping statistics --- 00:14:24.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.496 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:24.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:24.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:14:24.496 00:14:24.496 --- 10.0.0.4 ping statistics --- 00:14:24.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.496 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:24.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:24.496 00:14:24.496 --- 10.0.0.1 ping statistics --- 00:14:24.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.496 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:24.496 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:24.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:24.754 00:14:24.754 --- 10.0.0.2 ping statistics --- 00:14:24.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.754 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66434 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66434 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66434 ']' 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.754 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.754 [2024-10-07 11:25:20.105894] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
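The nvmf/common.sh trace above (@177 through @225) is what builds the veth/bridge topology the rest of this run depends on: a network namespace for the target, two veth pairs per side, a bridge joining the host-side peers, iptables rules opening TCP port 4420, and ping checks in both directions. A condensed standalone sketch of those same steps is shown below — the interface names, addresses and port come straight from the log, but the loops and the dropped "-m comment" tags are simplifications, so treat it as a sketch rather than the exact script the harness runs.

  # create the target namespace and the veth pairs (initiator side / target side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # allow NVMe/TCP traffic (port 4420) in, and let the bridge forward to itself
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity-check reachability in both directions before starting the target
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the topology verified, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so it listens on 10.0.0.3:4420 while the initiator-side nvme connect and fio jobs run from the host.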
00:14:24.754 [2024-10-07 11:25:20.105994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.754 [2024-10-07 11:25:20.248660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.012 [2024-10-07 11:25:20.396328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.012 [2024-10-07 11:25:20.396383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.012 [2024-10-07 11:25:20.396397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.012 [2024-10-07 11:25:20.396407] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.012 [2024-10-07 11:25:20.396417] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.012 [2024-10-07 11:25:20.397722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.012 [2024-10-07 11:25:20.397812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.012 [2024-10-07 11:25:20.397974] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.012 [2024-10-07 11:25:20.397980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.012 [2024-10-07 11:25:20.455335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.578 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.578 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:14:25.578 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:25.578 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.578 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.836 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.836 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.094 [2024-10-07 11:25:21.380053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.094 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:26.354 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:26.354 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:26.612 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:26.612 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:26.871 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:26.871 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.129 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:27.129 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:27.387 11:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.645 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:27.645 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.904 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:27.904 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.471 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:28.471 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:28.471 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:29.037 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:29.037 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.037 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:29.037 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.296 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:29.555 [2024-10-07 11:25:25.032181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:29.555 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:29.814 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:30.072 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:30.330 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:30.330 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.330 11:25:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.330 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:30.330 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:30.330 11:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:32.231 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:32.231 [global] 00:14:32.231 thread=1 00:14:32.231 invalidate=1 00:14:32.231 rw=write 00:14:32.231 time_based=1 00:14:32.231 runtime=1 00:14:32.231 ioengine=libaio 00:14:32.231 direct=1 00:14:32.231 bs=4096 00:14:32.231 iodepth=1 00:14:32.231 norandommap=0 00:14:32.231 numjobs=1 00:14:32.231 00:14:32.490 verify_dump=1 00:14:32.490 verify_backlog=512 00:14:32.490 verify_state_save=0 00:14:32.490 do_verify=1 00:14:32.490 verify=crc32c-intel 00:14:32.490 [job0] 00:14:32.490 filename=/dev/nvme0n1 00:14:32.490 [job1] 00:14:32.490 filename=/dev/nvme0n2 00:14:32.490 [job2] 00:14:32.490 filename=/dev/nvme0n3 00:14:32.490 [job3] 00:14:32.490 filename=/dev/nvme0n4 00:14:32.490 Could not set queue depth (nvme0n1) 00:14:32.490 Could not set queue depth (nvme0n2) 00:14:32.490 Could not set queue depth (nvme0n3) 00:14:32.490 Could not set queue depth (nvme0n4) 00:14:32.490 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:32.490 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:32.490 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:32.490 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:32.490 fio-3.35 00:14:32.490 Starting 4 threads 00:14:33.867 00:14:33.867 job0: (groupid=0, jobs=1): err= 0: pid=66624: Mon Oct 7 11:25:29 2024 00:14:33.867 read: IOPS=1523, BW=6094KiB/s (6240kB/s)(6100KiB/1001msec) 00:14:33.867 slat (nsec): min=14163, max=75146, avg=20247.06, stdev=6232.41 00:14:33.867 clat (usec): min=265, max=736, avg=353.95, stdev=75.59 00:14:33.867 lat (usec): min=299, max=764, avg=374.20, stdev=78.21 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 318], 00:14:33.867 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:14:33.867 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 445], 95.00th=[ 529], 00:14:33.867 | 99.00th=[ 660], 99.50th=[ 668], 99.90th=[ 693], 99.95th=[ 734], 00:14:33.867 | 99.99th=[ 734] 
00:14:33.867 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:33.867 slat (usec): min=17, max=106, avg=31.33, stdev= 8.05 00:14:33.867 clat (usec): min=93, max=953, avg=243.31, stdev=39.15 00:14:33.867 lat (usec): min=115, max=989, avg=274.64, stdev=41.41 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 109], 5.00th=[ 178], 10.00th=[ 206], 20.00th=[ 233], 00:14:33.867 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:14:33.867 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:14:33.867 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 519], 99.95th=[ 955], 00:14:33.867 | 99.99th=[ 955] 00:14:33.867 bw ( KiB/s): min= 8192, max= 8192, per=26.00%, avg=8192.00, stdev= 0.00, samples=1 00:14:33.867 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:33.867 lat (usec) : 100=0.10%, 250=28.78%, 500=68.21%, 750=2.87%, 1000=0.03% 00:14:33.867 cpu : usr=1.50%, sys=6.50%, ctx=3061, majf=0, minf=7 00:14:33.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 issued rwts: total=1525,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:33.867 job1: (groupid=0, jobs=1): err= 0: pid=66625: Mon Oct 7 11:25:29 2024 00:14:33.867 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:33.867 slat (nsec): min=8112, max=42804, avg=13162.60, stdev=3010.92 00:14:33.867 clat (usec): min=270, max=552, avg=337.65, stdev=21.36 00:14:33.867 lat (usec): min=283, max=577, avg=350.81, stdev=21.17 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 322], 00:14:33.867 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 338], 00:14:33.867 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 371], 00:14:33.867 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 465], 99.95th=[ 553], 00:14:33.867 | 99.99th=[ 553] 00:14:33.867 write: IOPS=1636, BW=6545KiB/s (6703kB/s)(6552KiB/1001msec); 0 zone resets 00:14:33.867 slat (usec): min=10, max=101, avg=19.07, stdev= 6.79 00:14:33.867 clat (usec): min=158, max=2112, avg=259.46, stdev=55.49 00:14:33.867 lat (usec): min=186, max=2127, avg=278.54, stdev=56.94 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 221], 20.00th=[ 243], 00:14:33.867 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:14:33.867 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:14:33.867 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 490], 99.95th=[ 2114], 00:14:33.867 | 99.99th=[ 2114] 00:14:33.867 bw ( KiB/s): min= 8192, max= 8192, per=26.00%, avg=8192.00, stdev= 0.00, samples=1 00:14:33.867 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:33.867 lat (usec) : 250=15.75%, 500=84.18%, 750=0.03% 00:14:33.867 lat (msec) : 4=0.03% 00:14:33.867 cpu : usr=1.60%, sys=3.90%, ctx=3174, majf=0, minf=6 00:14:33.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 issued rwts: total=1536,1638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.867 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:14:33.867 job2: (groupid=0, jobs=1): err= 0: pid=66626: Mon Oct 7 11:25:29 2024 00:14:33.867 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:33.867 slat (nsec): min=7922, max=36613, avg=12010.94, stdev=2676.93 00:14:33.867 clat (usec): min=273, max=729, avg=339.07, stdev=22.18 00:14:33.867 lat (usec): min=285, max=757, avg=351.08, stdev=22.50 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:14:33.867 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:14:33.867 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 371], 00:14:33.867 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 498], 99.95th=[ 734], 00:14:33.867 | 99.99th=[ 734] 00:14:33.867 write: IOPS=1636, BW=6545KiB/s (6703kB/s)(6552KiB/1001msec); 0 zone resets 00:14:33.867 slat (nsec): min=10884, max=67571, avg=21452.10, stdev=5867.66 00:14:33.867 clat (usec): min=112, max=2210, avg=256.76, stdev=58.36 00:14:33.867 lat (usec): min=142, max=2237, avg=278.22, stdev=59.21 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 217], 20.00th=[ 241], 00:14:33.867 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:14:33.867 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:14:33.867 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 586], 99.95th=[ 2212], 00:14:33.867 | 99.99th=[ 2212] 00:14:33.867 bw ( KiB/s): min= 8192, max= 8192, per=26.00%, avg=8192.00, stdev= 0.00, samples=1 00:14:33.867 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:33.867 lat (usec) : 250=17.74%, 500=82.17%, 750=0.06% 00:14:33.867 lat (msec) : 4=0.03% 00:14:33.867 cpu : usr=1.70%, sys=4.10%, ctx=3174, majf=0, minf=11 00:14:33.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 issued rwts: total=1536,1638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:33.867 job3: (groupid=0, jobs=1): err= 0: pid=66627: Mon Oct 7 11:25:29 2024 00:14:33.867 read: IOPS=2969, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1000msec) 00:14:33.867 slat (usec): min=11, max=131, avg=13.61, stdev= 5.88 00:14:33.867 clat (usec): min=125, max=352, avg=168.58, stdev=13.89 00:14:33.867 lat (usec): min=155, max=367, avg=182.19, stdev=15.99 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:14:33.867 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:14:33.867 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:14:33.867 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 273], 99.95th=[ 285], 00:14:33.867 | 99.99th=[ 355] 00:14:33.867 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:14:33.867 slat (nsec): min=13979, max=90447, avg=20137.11, stdev=4366.03 00:14:33.867 clat (usec): min=94, max=545, avg=126.10, stdev=20.38 00:14:33.867 lat (usec): min=119, max=564, avg=146.24, stdev=21.91 00:14:33.867 clat percentiles (usec): 00:14:33.867 | 1.00th=[ 103], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:14:33.867 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 128], 00:14:33.867 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:14:33.867 | 99.00th=[ 169], 99.50th=[ 237], 99.90th=[ 359], 
99.95th=[ 445], 00:14:33.867 | 99.99th=[ 545] 00:14:33.867 bw ( KiB/s): min=12288, max=12288, per=39.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:33.867 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:33.867 lat (usec) : 100=0.02%, 250=99.67%, 500=0.30%, 750=0.02% 00:14:33.867 cpu : usr=2.50%, sys=7.70%, ctx=6052, majf=0, minf=15 00:14:33.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.867 issued rwts: total=2969,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:33.867 00:14:33.867 Run status group 0 (all jobs): 00:14:33.867 READ: bw=29.5MiB/s (31.0MB/s), 6094KiB/s-11.6MiB/s (6240kB/s-12.2MB/s), io=29.6MiB (31.0MB), run=1000-1001msec 00:14:33.867 WRITE: bw=30.8MiB/s (32.3MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.8MiB (32.3MB), run=1000-1001msec 00:14:33.867 00:14:33.867 Disk stats (read/write): 00:14:33.867 nvme0n1: ios=1213/1536, merge=0/0, ticks=448/384, in_queue=832, util=88.38% 00:14:33.867 nvme0n2: ios=1246/1536, merge=0/0, ticks=421/368, in_queue=789, util=87.94% 00:14:33.867 nvme0n3: ios=1208/1536, merge=0/0, ticks=398/389, in_queue=787, util=89.13% 00:14:33.867 nvme0n4: ios=2560/2638, merge=0/0, ticks=437/346, in_queue=783, util=89.79% 00:14:33.868 11:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:33.868 [global] 00:14:33.868 thread=1 00:14:33.868 invalidate=1 00:14:33.868 rw=randwrite 00:14:33.868 time_based=1 00:14:33.868 runtime=1 00:14:33.868 ioengine=libaio 00:14:33.868 direct=1 00:14:33.868 bs=4096 00:14:33.868 iodepth=1 00:14:33.868 norandommap=0 00:14:33.868 numjobs=1 00:14:33.868 00:14:33.868 verify_dump=1 00:14:33.868 verify_backlog=512 00:14:33.868 verify_state_save=0 00:14:33.868 do_verify=1 00:14:33.868 verify=crc32c-intel 00:14:33.868 [job0] 00:14:33.868 filename=/dev/nvme0n1 00:14:33.868 [job1] 00:14:33.868 filename=/dev/nvme0n2 00:14:33.868 [job2] 00:14:33.868 filename=/dev/nvme0n3 00:14:33.868 [job3] 00:14:33.868 filename=/dev/nvme0n4 00:14:33.868 Could not set queue depth (nvme0n1) 00:14:33.868 Could not set queue depth (nvme0n2) 00:14:33.868 Could not set queue depth (nvme0n3) 00:14:33.868 Could not set queue depth (nvme0n4) 00:14:33.868 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:33.868 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:33.868 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:33.868 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:33.868 fio-3.35 00:14:33.868 Starting 4 threads 00:14:35.244 00:14:35.244 job0: (groupid=0, jobs=1): err= 0: pid=66685: Mon Oct 7 11:25:30 2024 00:14:35.244 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:35.244 slat (nsec): min=10894, max=68104, avg=14832.89, stdev=4095.79 00:14:35.244 clat (usec): min=129, max=1569, avg=159.60, stdev=28.34 00:14:35.244 lat (usec): min=141, max=1581, avg=174.43, stdev=28.90 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 
149], 00:14:35.244 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:14:35.244 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:14:35.244 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 219], 99.95th=[ 310], 00:14:35.244 | 99.99th=[ 1565] 00:14:35.244 write: IOPS=3177, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:14:35.244 slat (usec): min=13, max=164, avg=21.33, stdev= 5.90 00:14:35.244 clat (usec): min=92, max=203, avg=121.14, stdev=10.55 00:14:35.244 lat (usec): min=109, max=276, avg=142.47, stdev=12.07 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:14:35.244 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:14:35.244 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 141], 00:14:35.244 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 202], 00:14:35.244 | 99.99th=[ 204] 00:14:35.244 bw ( KiB/s): min=12288, max=12288, per=29.02%, avg=12288.00, stdev= 0.00, samples=1 00:14:35.244 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:35.244 lat (usec) : 100=0.50%, 250=99.47%, 500=0.02% 00:14:35.244 lat (msec) : 2=0.02% 00:14:35.244 cpu : usr=2.70%, sys=8.70%, ctx=6253, majf=0, minf=15 00:14:35.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 issued rwts: total=3072,3181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.244 job1: (groupid=0, jobs=1): err= 0: pid=66686: Mon Oct 7 11:25:30 2024 00:14:35.244 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:35.244 slat (nsec): min=10977, max=30560, avg=12803.06, stdev=2251.97 00:14:35.244 clat (usec): min=133, max=320, avg=159.06, stdev=10.96 00:14:35.244 lat (usec): min=148, max=331, avg=171.86, stdev=11.60 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:14:35.244 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:14:35.244 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178], 00:14:35.244 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 210], 99.95th=[ 241], 00:14:35.244 | 99.99th=[ 322] 00:14:35.244 write: IOPS=3315, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:14:35.244 slat (nsec): min=13282, max=94773, avg=18658.52, stdev=3549.12 00:14:35.244 clat (usec): min=95, max=1057, avg=120.66, stdev=24.63 00:14:35.244 lat (usec): min=112, max=1088, avg=139.32, stdev=25.36 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:14:35.244 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:14:35.244 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 143], 00:14:35.244 | 99.00th=[ 161], 99.50th=[ 229], 99.90th=[ 351], 99.95th=[ 408], 00:14:35.244 | 99.99th=[ 1057] 00:14:35.244 bw ( KiB/s): min=13168, max=13168, per=31.10%, avg=13168.00, stdev= 0.00, samples=1 00:14:35.244 iops : min= 3292, max= 3292, avg=3292.00, stdev= 0.00, samples=1 00:14:35.244 lat (usec) : 100=1.46%, 250=98.34%, 500=0.19% 00:14:35.244 lat (msec) : 2=0.02% 00:14:35.244 cpu : usr=2.00%, sys=8.20%, ctx=6393, majf=0, minf=9 00:14:35.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.244 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 issued rwts: total=3072,3319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.244 job2: (groupid=0, jobs=1): err= 0: pid=66687: Mon Oct 7 11:25:30 2024 00:14:35.244 read: IOPS=1712, BW=6849KiB/s (7014kB/s)(6856KiB/1001msec) 00:14:35.244 slat (usec): min=12, max=323, avg=17.71, stdev= 9.16 00:14:35.244 clat (usec): min=173, max=2125, avg=292.68, stdev=76.86 00:14:35.244 lat (usec): min=201, max=2147, avg=310.40, stdev=79.92 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:14:35.244 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:14:35.244 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 486], 00:14:35.244 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 562], 99.95th=[ 2114], 00:14:35.244 | 99.99th=[ 2114] 00:14:35.244 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:35.244 slat (nsec): min=14658, max=89656, avg=24048.97, stdev=6363.61 00:14:35.244 clat (usec): min=106, max=1218, avg=200.81, stdev=44.48 00:14:35.244 lat (usec): min=125, max=1245, avg=224.86, stdev=44.83 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 115], 5.00th=[ 129], 10.00th=[ 141], 20.00th=[ 188], 00:14:35.244 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:14:35.244 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 243], 00:14:35.244 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 717], 99.95th=[ 725], 00:14:35.244 | 99.99th=[ 1221] 00:14:35.244 bw ( KiB/s): min= 8192, max= 8192, per=19.35%, avg=8192.00, stdev= 0.00, samples=1 00:14:35.244 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:35.244 lat (usec) : 250=55.02%, 500=43.30%, 750=1.62% 00:14:35.244 lat (msec) : 2=0.03%, 4=0.03% 00:14:35.244 cpu : usr=2.10%, sys=5.90%, ctx=3762, majf=0, minf=7 00:14:35.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 issued rwts: total=1714,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.244 job3: (groupid=0, jobs=1): err= 0: pid=66688: Mon Oct 7 11:25:30 2024 00:14:35.244 read: IOPS=1680, BW=6721KiB/s (6883kB/s)(6728KiB/1001msec) 00:14:35.244 slat (nsec): min=11893, max=40245, avg=14302.62, stdev=3348.91 00:14:35.244 clat (usec): min=151, max=554, avg=283.86, stdev=39.95 00:14:35.244 lat (usec): min=165, max=568, avg=298.17, stdev=41.11 00:14:35.244 clat percentiles (usec): 00:14:35.244 | 1.00th=[ 217], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:14:35.244 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:14:35.244 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 375], 00:14:35.244 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 553], 99.95th=[ 553], 00:14:35.244 | 99.99th=[ 553] 00:14:35.244 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:35.244 slat (usec): min=18, max=104, avg=24.19, stdev= 8.97 00:14:35.244 clat (usec): min=104, max=3465, avg=215.76, stdev=93.48 00:14:35.244 lat (usec): min=133, max=3500, avg=239.95, stdev=96.98 00:14:35.244 clat percentiles 
(usec): 00:14:35.244 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 143], 20.00th=[ 192], 00:14:35.244 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:14:35.244 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 258], 95.00th=[ 338], 00:14:35.244 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 1156], 99.95th=[ 1221], 00:14:35.244 | 99.99th=[ 3458] 00:14:35.244 bw ( KiB/s): min= 8192, max= 8192, per=19.35%, avg=8192.00, stdev= 0.00, samples=1 00:14:35.244 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:35.244 lat (usec) : 250=50.13%, 500=49.54%, 750=0.24% 00:14:35.244 lat (msec) : 2=0.05%, 4=0.03% 00:14:35.244 cpu : usr=1.70%, sys=5.70%, ctx=3730, majf=0, minf=13 00:14:35.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.244 issued rwts: total=1682,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.244 00:14:35.244 Run status group 0 (all jobs): 00:14:35.244 READ: bw=37.2MiB/s (39.0MB/s), 6721KiB/s-12.0MiB/s (6883kB/s-12.6MB/s), io=37.3MiB (39.1MB), run=1001-1001msec 00:14:35.244 WRITE: bw=41.3MiB/s (43.4MB/s), 8184KiB/s-13.0MiB/s (8380kB/s-13.6MB/s), io=41.4MiB (43.4MB), run=1001-1001msec 00:14:35.244 00:14:35.244 Disk stats (read/write): 00:14:35.244 nvme0n1: ios=2610/2783, merge=0/0, ticks=470/349, in_queue=819, util=88.08% 00:14:35.244 nvme0n2: ios=2603/2930, merge=0/0, ticks=428/376, in_queue=804, util=87.54% 00:14:35.244 nvme0n3: ios=1536/1679, merge=0/0, ticks=454/344, in_queue=798, util=88.89% 00:14:35.244 nvme0n4: ios=1536/1615, merge=0/0, ticks=435/362, in_queue=797, util=89.32% 00:14:35.244 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:35.244 [global] 00:14:35.244 thread=1 00:14:35.244 invalidate=1 00:14:35.244 rw=write 00:14:35.244 time_based=1 00:14:35.244 runtime=1 00:14:35.244 ioengine=libaio 00:14:35.244 direct=1 00:14:35.244 bs=4096 00:14:35.244 iodepth=128 00:14:35.244 norandommap=0 00:14:35.244 numjobs=1 00:14:35.244 00:14:35.244 verify_dump=1 00:14:35.244 verify_backlog=512 00:14:35.244 verify_state_save=0 00:14:35.244 do_verify=1 00:14:35.244 verify=crc32c-intel 00:14:35.244 [job0] 00:14:35.244 filename=/dev/nvme0n1 00:14:35.244 [job1] 00:14:35.244 filename=/dev/nvme0n2 00:14:35.244 [job2] 00:14:35.244 filename=/dev/nvme0n3 00:14:35.244 [job3] 00:14:35.244 filename=/dev/nvme0n4 00:14:35.244 Could not set queue depth (nvme0n1) 00:14:35.244 Could not set queue depth (nvme0n2) 00:14:35.244 Could not set queue depth (nvme0n3) 00:14:35.245 Could not set queue depth (nvme0n4) 00:14:35.245 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.245 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.245 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.245 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.245 fio-3.35 00:14:35.245 Starting 4 threads 00:14:36.619 00:14:36.619 job0: (groupid=0, jobs=1): err= 0: pid=66749: Mon Oct 7 11:25:31 2024 00:14:36.619 read: IOPS=5620, BW=22.0MiB/s 
(23.0MB/s)(22.0MiB/1002msec) 00:14:36.619 slat (usec): min=6, max=4357, avg=84.55, stdev=399.65 00:14:36.619 clat (usec): min=8429, max=13689, avg=11393.25, stdev=589.46 00:14:36.619 lat (usec): min=10327, max=13699, avg=11477.80, stdev=440.37 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:14:36.619 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:14:36.619 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:14:36.619 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:14:36.619 | 99.99th=[13698] 00:14:36.619 write: IOPS=5781, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1002msec); 0 zone resets 00:14:36.619 slat (usec): min=10, max=2612, avg=83.11, stdev=349.43 00:14:36.619 clat (usec): min=197, max=11924, avg=10799.44, stdev=916.21 00:14:36.619 lat (usec): min=2125, max=11945, avg=10882.55, stdev=846.93 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[ 5473], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:14:36.619 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:14:36.619 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11469], 00:14:36.619 | 99.00th=[11731], 99.50th=[11731], 99.90th=[11863], 99.95th=[11863], 00:14:36.619 | 99.99th=[11863] 00:14:36.619 bw ( KiB/s): min=20744, max=24576, per=34.18%, avg=22660.00, stdev=2709.63, samples=2 00:14:36.619 iops : min= 5186, max= 6144, avg=5665.00, stdev=677.41, samples=2 00:14:36.619 lat (usec) : 250=0.01% 00:14:36.619 lat (msec) : 4=0.28%, 10=3.64%, 20=96.07% 00:14:36.619 cpu : usr=4.80%, sys=14.79%, ctx=362, majf=0, minf=17 00:14:36.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:36.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:36.619 issued rwts: total=5632,5793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:36.619 job1: (groupid=0, jobs=1): err= 0: pid=66750: Mon Oct 7 11:25:31 2024 00:14:36.619 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:36.619 slat (usec): min=7, max=7641, avg=193.33, stdev=649.49 00:14:36.619 clat (usec): min=18084, max=39013, avg=25304.34, stdev=2830.77 00:14:36.619 lat (usec): min=19494, max=39031, avg=25497.68, stdev=2778.51 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[20317], 5.00th=[22414], 10.00th=[22938], 20.00th=[23462], 00:14:36.619 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:14:36.619 | 70.00th=[25560], 80.00th=[26870], 90.00th=[29754], 95.00th=[31327], 00:14:36.619 | 99.00th=[35914], 99.50th=[35914], 99.90th=[38011], 99.95th=[39060], 00:14:36.619 | 99.99th=[39060] 00:14:36.619 write: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:14:36.619 slat (usec): min=10, max=6124, avg=177.36, stdev=712.10 00:14:36.619 clat (usec): min=179, max=27556, avg=22419.82, stdev=3174.87 00:14:36.619 lat (usec): min=3539, max=27583, avg=22597.18, stdev=3116.30 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[ 4228], 5.00th=[18220], 10.00th=[19792], 20.00th=[21627], 00:14:36.619 | 30.00th=[22414], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:14:36.619 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25822], 00:14:36.619 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27132], 99.95th=[27657], 00:14:36.619 | 
99.99th=[27657] 00:14:36.619 bw ( KiB/s): min=12288, max=12288, per=18.53%, avg=12288.00, stdev= 0.00, samples=1 00:14:36.619 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:36.619 lat (usec) : 250=0.02% 00:14:36.619 lat (msec) : 4=0.30%, 10=0.89%, 20=4.39%, 50=94.40% 00:14:36.619 cpu : usr=2.30%, sys=7.80%, ctx=787, majf=0, minf=13 00:14:36.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:36.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:36.619 issued rwts: total=2560,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:36.619 job2: (groupid=0, jobs=1): err= 0: pid=66751: Mon Oct 7 11:25:31 2024 00:14:36.619 read: IOPS=5073, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec) 00:14:36.619 slat (usec): min=5, max=4664, avg=95.27, stdev=448.13 00:14:36.619 clat (usec): min=215, max=15510, avg=12592.34, stdev=1139.92 00:14:36.619 lat (usec): min=2955, max=15611, avg=12687.61, stdev=1049.81 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[ 6587], 5.00th=[11600], 10.00th=[12125], 20.00th=[12256], 00:14:36.619 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:14:36.619 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13304], 95.00th=[13566], 00:14:36.619 | 99.00th=[14484], 99.50th=[14484], 99.90th=[14877], 99.95th=[15533], 00:14:36.619 | 99.99th=[15533] 00:14:36.619 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:14:36.619 slat (usec): min=10, max=2833, avg=93.18, stdev=394.38 00:14:36.619 clat (usec): min=8995, max=13148, avg=12220.97, stdev=509.93 00:14:36.619 lat (usec): min=10165, max=13168, avg=12314.15, stdev=324.40 00:14:36.619 clat percentiles (usec): 00:14:36.619 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:14:36.619 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:14:36.619 | 70.00th=[12518], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:14:36.620 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13173], 00:14:36.620 | 99.99th=[13173] 00:14:36.620 bw ( KiB/s): min=20480, max=20480, per=30.89%, avg=20480.00, stdev= 0.00, samples=2 00:14:36.620 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:36.620 lat (usec) : 250=0.01% 00:14:36.620 lat (msec) : 4=0.31%, 10=1.77%, 20=97.90% 00:14:36.620 cpu : usr=3.79%, sys=15.07%, ctx=369, majf=0, minf=13 00:14:36.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:36.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:36.620 issued rwts: total=5089,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:36.620 job3: (groupid=0, jobs=1): err= 0: pid=66752: Mon Oct 7 11:25:31 2024 00:14:36.620 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:14:36.620 slat (usec): min=7, max=6995, avg=188.41, stdev=676.33 00:14:36.620 clat (usec): min=17393, max=32531, avg=23967.11, stdev=2298.75 00:14:36.620 lat (usec): min=17414, max=32820, avg=24155.52, stdev=2239.41 00:14:36.620 clat percentiles (usec): 00:14:36.620 | 1.00th=[18220], 5.00th=[19792], 10.00th=[21103], 20.00th=[22676], 00:14:36.620 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 
00:14:36.620 | 70.00th=[24511], 80.00th=[25035], 90.00th=[27132], 95.00th=[27919], 00:14:36.620 | 99.00th=[30540], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:14:36.620 | 99.99th=[32637] 00:14:36.620 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1003msec); 0 zone resets 00:14:36.620 slat (usec): min=9, max=6039, avg=166.05, stdev=687.87 00:14:36.620 clat (usec): min=2844, max=29557, avg=21776.60, stdev=3745.19 00:14:36.620 lat (usec): min=2864, max=29757, avg=21942.65, stdev=3714.24 00:14:36.620 clat percentiles (usec): 00:14:36.620 | 1.00th=[ 8455], 5.00th=[15270], 10.00th=[16450], 20.00th=[19006], 00:14:36.620 | 30.00th=[20841], 40.00th=[22414], 50.00th=[22676], 60.00th=[23200], 00:14:36.620 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25297], 95.00th=[26870], 00:14:36.620 | 99.00th=[28443], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:14:36.620 | 99.99th=[29492] 00:14:36.620 bw ( KiB/s): min=10576, max=12288, per=17.24%, avg=11432.00, stdev=1210.57, samples=2 00:14:36.620 iops : min= 2644, max= 3072, avg=2858.00, stdev=302.64, samples=2 00:14:36.620 lat (msec) : 4=0.40%, 10=0.58%, 20=14.84%, 50=84.18% 00:14:36.620 cpu : usr=2.50%, sys=8.48%, ctx=750, majf=0, minf=9 00:14:36.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:36.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:36.620 issued rwts: total=2560,2985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:36.620 00:14:36.620 Run status group 0 (all jobs): 00:14:36.620 READ: bw=61.7MiB/s (64.7MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=61.9MiB (64.9MB), run=1001-1003msec 00:14:36.620 WRITE: bw=64.8MiB/s (67.9MB/s), 10.6MiB/s-22.6MiB/s (11.2MB/s-23.7MB/s), io=64.9MiB (68.1MB), run=1001-1003msec 00:14:36.620 00:14:36.620 Disk stats (read/write): 00:14:36.620 nvme0n1: ios=4850/5120, merge=0/0, ticks=12372/11678, in_queue=24050, util=89.87% 00:14:36.620 nvme0n2: ios=2097/2537, merge=0/0, ticks=12219/12485, in_queue=24704, util=88.89% 00:14:36.620 nvme0n3: ios=4256/4608, merge=0/0, ticks=12053/11876, in_queue=23929, util=89.34% 00:14:36.620 nvme0n4: ios=2240/2560, merge=0/0, ticks=13050/12603, in_queue=25653, util=89.80% 00:14:36.620 11:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:36.620 [global] 00:14:36.620 thread=1 00:14:36.620 invalidate=1 00:14:36.620 rw=randwrite 00:14:36.620 time_based=1 00:14:36.620 runtime=1 00:14:36.620 ioengine=libaio 00:14:36.620 direct=1 00:14:36.620 bs=4096 00:14:36.620 iodepth=128 00:14:36.620 norandommap=0 00:14:36.620 numjobs=1 00:14:36.620 00:14:36.620 verify_dump=1 00:14:36.620 verify_backlog=512 00:14:36.620 verify_state_save=0 00:14:36.620 do_verify=1 00:14:36.620 verify=crc32c-intel 00:14:36.620 [job0] 00:14:36.620 filename=/dev/nvme0n1 00:14:36.620 [job1] 00:14:36.620 filename=/dev/nvme0n2 00:14:36.620 [job2] 00:14:36.620 filename=/dev/nvme0n3 00:14:36.620 [job3] 00:14:36.620 filename=/dev/nvme0n4 00:14:36.620 Could not set queue depth (nvme0n1) 00:14:36.620 Could not set queue depth (nvme0n2) 00:14:36.620 Could not set queue depth (nvme0n3) 00:14:36.620 Could not set queue depth (nvme0n4) 00:14:36.620 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:36.620 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:36.620 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:36.620 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:36.620 fio-3.35 00:14:36.620 Starting 4 threads 00:14:38.026 00:14:38.026 job0: (groupid=0, jobs=1): err= 0: pid=66805: Mon Oct 7 11:25:33 2024 00:14:38.026 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:14:38.026 slat (usec): min=6, max=16276, avg=165.18, stdev=1264.78 00:14:38.026 clat (usec): min=14098, max=36902, avg=21885.73, stdev=2440.37 00:14:38.026 lat (usec): min=14110, max=42873, avg=22050.91, stdev=2661.42 00:14:38.026 clat percentiles (usec): 00:14:38.026 | 1.00th=[16909], 5.00th=[19006], 10.00th=[20055], 20.00th=[20317], 00:14:38.026 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21365], 60.00th=[22152], 00:14:38.026 | 70.00th=[22414], 80.00th=[22938], 90.00th=[25297], 95.00th=[27132], 00:14:38.026 | 99.00th=[27395], 99.50th=[31851], 99.90th=[35914], 99.95th=[36439], 00:14:38.026 | 99.99th=[36963] 00:14:38.026 write: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1003msec); 0 zone resets 00:14:38.026 slat (usec): min=12, max=12266, avg=146.48, stdev=971.44 00:14:38.026 clat (usec): min=894, max=27299, avg=18707.06, stdev=2986.77 00:14:38.026 lat (usec): min=8681, max=27344, avg=18853.54, stdev=2851.66 00:14:38.026 clat percentiles (usec): 00:14:38.026 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[15008], 20.00th=[17695], 00:14:38.026 | 30.00th=[18482], 40.00th=[19268], 50.00th=[19268], 60.00th=[19792], 00:14:38.026 | 70.00th=[20055], 80.00th=[20841], 90.00th=[21365], 95.00th=[21627], 00:14:38.026 | 99.00th=[23725], 99.50th=[23725], 99.90th=[25560], 99.95th=[27132], 00:14:38.026 | 99.99th=[27395] 00:14:38.026 bw ( KiB/s): min=11776, max=12856, per=24.28%, avg=12316.00, stdev=763.68, samples=2 00:14:38.026 iops : min= 2944, max= 3214, avg=3079.00, stdev=190.92, samples=2 00:14:38.026 lat (usec) : 1000=0.02% 00:14:38.026 lat (msec) : 10=2.25%, 20=38.14%, 50=59.59% 00:14:38.026 cpu : usr=2.99%, sys=8.48%, ctx=135, majf=0, minf=2 00:14:38.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:38.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.026 issued rwts: total=3072,3199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.026 job1: (groupid=0, jobs=1): err= 0: pid=66806: Mon Oct 7 11:25:33 2024 00:14:38.026 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:14:38.026 slat (usec): min=3, max=14127, avg=163.80, stdev=1127.76 00:14:38.026 clat (usec): min=7929, max=39205, avg=21633.70, stdev=3577.87 00:14:38.026 lat (usec): min=7937, max=42174, avg=21797.50, stdev=3684.73 00:14:38.026 clat percentiles (usec): 00:14:38.026 | 1.00th=[10159], 5.00th=[18220], 10.00th=[19006], 20.00th=[20055], 00:14:38.026 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:14:38.026 | 70.00th=[22152], 80.00th=[22676], 90.00th=[25560], 95.00th=[27395], 00:14:38.026 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:14:38.026 | 99.99th=[39060] 00:14:38.026 write: IOPS=3215, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1003msec); 0 zone resets 00:14:38.026 slat (usec): min=5, max=15765, 
avg=146.03, stdev=931.08 00:14:38.026 clat (usec): min=2910, max=39246, avg=18828.44, stdev=3560.59 00:14:38.026 lat (usec): min=2943, max=39257, avg=18974.47, stdev=3482.50 00:14:38.026 clat percentiles (usec): 00:14:38.026 | 1.00th=[ 5342], 5.00th=[11207], 10.00th=[14615], 20.00th=[17433], 00:14:38.026 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19268], 60.00th=[20055], 00:14:38.026 | 70.00th=[20579], 80.00th=[21365], 90.00th=[21890], 95.00th=[21890], 00:14:38.026 | 99.00th=[26346], 99.50th=[26608], 99.90th=[27132], 99.95th=[28443], 00:14:38.026 | 99.99th=[39060] 00:14:38.026 bw ( KiB/s): min=12000, max=12848, per=24.50%, avg=12424.00, stdev=599.63, samples=2 00:14:38.026 iops : min= 3000, max= 3212, avg=3106.00, stdev=149.91, samples=2 00:14:38.027 lat (msec) : 4=0.44%, 10=1.46%, 20=37.70%, 50=60.39% 00:14:38.027 cpu : usr=3.19%, sys=8.28%, ctx=185, majf=0, minf=3 00:14:38.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.027 issued rwts: total=3072,3225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.027 job2: (groupid=0, jobs=1): err= 0: pid=66807: Mon Oct 7 11:25:33 2024 00:14:38.027 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:14:38.027 slat (usec): min=7, max=11359, avg=148.37, stdev=955.34 00:14:38.027 clat (usec): min=12581, max=39706, avg=21236.30, stdev=2766.18 00:14:38.027 lat (usec): min=12605, max=44768, avg=21384.67, stdev=2727.52 00:14:38.027 clat percentiles (usec): 00:14:38.027 | 1.00th=[13173], 5.00th=[15795], 10.00th=[19792], 20.00th=[20317], 00:14:38.027 | 30.00th=[20579], 40.00th=[20579], 50.00th=[21365], 60.00th=[21890], 00:14:38.027 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22676], 95.00th=[23987], 00:14:38.027 | 99.00th=[35390], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 00:14:38.027 | 99.99th=[39584] 00:14:38.027 write: IOPS=3138, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:14:38.027 slat (usec): min=9, max=19816, avg=164.63, stdev=1072.65 00:14:38.027 clat (usec): min=1730, max=30768, avg=19649.40, stdev=2761.86 00:14:38.027 lat (usec): min=8623, max=30785, avg=19814.03, stdev=2601.53 00:14:38.027 clat percentiles (usec): 00:14:38.027 | 1.00th=[ 9503], 5.00th=[16909], 10.00th=[17695], 20.00th=[18482], 00:14:38.027 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:14:38.027 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21627], 95.00th=[22414], 00:14:38.027 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:14:38.027 | 99.99th=[30802] 00:14:38.027 bw ( KiB/s): min=12232, max=12344, per=24.23%, avg=12288.00, stdev=79.20, samples=2 00:14:38.027 iops : min= 3058, max= 3086, avg=3072.00, stdev=19.80, samples=2 00:14:38.027 lat (msec) : 2=0.02%, 10=0.77%, 20=39.29%, 50=59.92% 00:14:38.027 cpu : usr=3.69%, sys=7.98%, ctx=173, majf=0, minf=1 00:14:38.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.027 issued rwts: total=3072,3151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.027 job3: (groupid=0, jobs=1): err= 0: pid=66808: Mon Oct 7 
11:25:33 2024 00:14:38.027 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:14:38.027 slat (usec): min=6, max=15049, avg=155.98, stdev=1000.64 00:14:38.027 clat (usec): min=7527, max=42272, avg=21591.24, stdev=4260.35 00:14:38.027 lat (usec): min=7539, max=44256, avg=21747.22, stdev=4257.62 00:14:38.027 clat percentiles (usec): 00:14:38.027 | 1.00th=[12911], 5.00th=[15008], 10.00th=[19006], 20.00th=[20055], 00:14:38.027 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:14:38.027 | 70.00th=[22152], 80.00th=[22414], 90.00th=[24511], 95.00th=[29754], 00:14:38.027 | 99.00th=[39060], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:14:38.027 | 99.99th=[42206] 00:14:38.027 write: IOPS=3142, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:14:38.027 slat (usec): min=6, max=16422, avg=156.28, stdev=959.56 00:14:38.027 clat (usec): min=3523, max=42229, avg=19318.88, stdev=3362.83 00:14:38.027 lat (usec): min=3546, max=42242, avg=19475.16, stdev=3270.14 00:14:38.027 clat percentiles (usec): 00:14:38.027 | 1.00th=[ 6063], 5.00th=[11863], 10.00th=[17171], 20.00th=[18220], 00:14:38.027 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19530], 60.00th=[20317], 00:14:38.027 | 70.00th=[20579], 80.00th=[21365], 90.00th=[21890], 95.00th=[22152], 00:14:38.027 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28443], 99.95th=[42206], 00:14:38.027 | 99.99th=[42206] 00:14:38.027 bw ( KiB/s): min=12232, max=12344, per=24.23%, avg=12288.00, stdev=79.20, samples=2 00:14:38.027 iops : min= 3058, max= 3086, avg=3072.00, stdev=19.80, samples=2 00:14:38.027 lat (msec) : 4=0.06%, 10=1.57%, 20=37.32%, 50=61.04% 00:14:38.027 cpu : usr=1.79%, sys=10.27%, ctx=188, majf=0, minf=3 00:14:38.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.027 issued rwts: total=3072,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.027 00:14:38.027 Run status group 0 (all jobs): 00:14:38.027 READ: bw=47.8MiB/s (50.1MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.5MB/s), io=48.0MiB (50.3MB), run=1003-1004msec 00:14:38.027 WRITE: bw=49.5MiB/s (51.9MB/s), 12.3MiB/s-12.6MiB/s (12.9MB/s-13.2MB/s), io=49.7MiB (52.1MB), run=1003-1004msec 00:14:38.027 00:14:38.027 Disk stats (read/write): 00:14:38.027 nvme0n1: ios=2610/2688, merge=0/0, ticks=54453/47918, in_queue=102371, util=87.58% 00:14:38.027 nvme0n2: ios=2585/2687, merge=0/0, ticks=53427/48796, in_queue=102223, util=86.69% 00:14:38.027 nvme0n3: ios=2552/2632, merge=0/0, ticks=52251/49678, in_queue=101929, util=89.04% 00:14:38.027 nvme0n4: ios=2551/2632, merge=0/0, ticks=53320/48945, in_queue=102265, util=89.71% 00:14:38.027 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:38.027 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66821 00:14:38.027 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:38.027 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:38.027 [global] 00:14:38.027 thread=1 00:14:38.027 invalidate=1 00:14:38.027 rw=read 00:14:38.027 time_based=1 00:14:38.027 runtime=10 00:14:38.027 ioengine=libaio 00:14:38.027 direct=1 00:14:38.027 bs=4096 
00:14:38.027 iodepth=1 00:14:38.027 norandommap=1 00:14:38.027 numjobs=1 00:14:38.027 00:14:38.027 [job0] 00:14:38.027 filename=/dev/nvme0n1 00:14:38.027 [job1] 00:14:38.027 filename=/dev/nvme0n2 00:14:38.027 [job2] 00:14:38.027 filename=/dev/nvme0n3 00:14:38.027 [job3] 00:14:38.027 filename=/dev/nvme0n4 00:14:38.027 Could not set queue depth (nvme0n1) 00:14:38.027 Could not set queue depth (nvme0n2) 00:14:38.027 Could not set queue depth (nvme0n3) 00:14:38.027 Could not set queue depth (nvme0n4) 00:14:38.027 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.027 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.027 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.027 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.027 fio-3.35 00:14:38.027 Starting 4 threads 00:14:41.333 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:41.333 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47144960, buflen=4096 00:14:41.333 fio: pid=66866, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:41.333 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:41.592 fio: pid=66864, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:41.592 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=61517824, buflen=4096 00:14:41.592 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:41.592 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:41.850 fio: pid=66861, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:41.850 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=64864256, buflen=4096 00:14:41.850 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:41.850 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:42.109 fio: pid=66862, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:42.109 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=64163840, buflen=4096 00:14:42.109 00:14:42.109 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66861: Mon Oct 7 11:25:37 2024 00:14:42.109 read: IOPS=4431, BW=17.3MiB/s (18.1MB/s)(61.9MiB/3574msec) 00:14:42.109 slat (usec): min=8, max=12601, avg=15.43, stdev=171.82 00:14:42.109 clat (usec): min=108, max=7359, avg=209.16, stdev=96.84 00:14:42.109 lat (usec): min=144, max=12826, avg=224.58, stdev=197.60 00:14:42.109 clat percentiles (usec): 00:14:42.109 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 163], 00:14:42.109 | 30.00th=[ 172], 40.00th=[ 204], 50.00th=[ 225], 60.00th=[ 229], 00:14:42.109 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 255], 00:14:42.109 | 
99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 668], 99.95th=[ 1975], 00:14:42.109 | 99.99th=[ 3851] 00:14:42.109 bw ( KiB/s): min=15416, max=22400, per=30.16%, avg=18020.00, stdev=3249.35, samples=6 00:14:42.109 iops : min= 3854, max= 5600, avg=4505.00, stdev=812.34, samples=6 00:14:42.109 lat (usec) : 250=91.65%, 500=8.16%, 750=0.09%, 1000=0.02% 00:14:42.109 lat (msec) : 2=0.03%, 4=0.04%, 10=0.01% 00:14:42.109 cpu : usr=1.01%, sys=5.32%, ctx=15852, majf=0, minf=1 00:14:42.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 issued rwts: total=15837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.109 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66862: Mon Oct 7 11:25:37 2024 00:14:42.109 read: IOPS=4032, BW=15.8MiB/s (16.5MB/s)(61.2MiB/3885msec) 00:14:42.109 slat (usec): min=7, max=15502, avg=17.06, stdev=236.82 00:14:42.109 clat (usec): min=126, max=3563, avg=229.60, stdev=64.04 00:14:42.109 lat (usec): min=138, max=15700, avg=246.66, stdev=244.91 00:14:42.109 clat percentiles (usec): 00:14:42.109 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 219], 00:14:42.109 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:14:42.109 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:14:42.109 | 99.00th=[ 297], 99.50th=[ 379], 99.90th=[ 783], 99.95th=[ 1106], 00:14:42.109 | 99.99th=[ 2966] 00:14:42.109 bw ( KiB/s): min=14800, max=16176, per=25.99%, avg=15528.57, stdev=475.44, samples=7 00:14:42.109 iops : min= 3700, max= 4044, avg=3882.14, stdev=118.86, samples=7 00:14:42.109 lat (usec) : 250=77.74%, 500=22.00%, 750=0.14%, 1000=0.04% 00:14:42.109 lat (msec) : 2=0.05%, 4=0.02% 00:14:42.109 cpu : usr=1.18%, sys=4.84%, ctx=15677, majf=0, minf=2 00:14:42.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 issued rwts: total=15666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.109 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66864: Mon Oct 7 11:25:37 2024 00:14:42.109 read: IOPS=4518, BW=17.6MiB/s (18.5MB/s)(58.7MiB/3324msec) 00:14:42.109 slat (usec): min=8, max=14822, avg=15.64, stdev=136.53 00:14:42.109 clat (usec): min=129, max=3633, avg=204.31, stdev=52.43 00:14:42.109 lat (usec): min=154, max=15044, avg=219.95, stdev=146.65 00:14:42.109 clat percentiles (usec): 00:14:42.109 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:14:42.109 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 219], 60.00th=[ 227], 00:14:42.109 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:14:42.109 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 461], 99.95th=[ 685], 00:14:42.109 | 99.99th=[ 2311] 00:14:42.109 bw ( KiB/s): min=15760, max=21624, per=30.13%, avg=18002.67, stdev=2737.32, samples=6 00:14:42.109 iops : min= 3940, max= 5406, avg=4500.67, stdev=684.33, samples=6 00:14:42.109 lat (usec) : 250=93.16%, 500=6.75%, 750=0.05%, 1000=0.01% 00:14:42.109 lat (msec) : 2=0.01%, 4=0.01% 
00:14:42.109 cpu : usr=1.63%, sys=5.42%, ctx=15046, majf=0, minf=1 00:14:42.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.109 issued rwts: total=15020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.109 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66866: Mon Oct 7 11:25:37 2024 00:14:42.109 read: IOPS=3874, BW=15.1MiB/s (15.9MB/s)(45.0MiB/2971msec) 00:14:42.109 slat (nsec): min=7774, max=42841, avg=11823.24, stdev=3665.97 00:14:42.109 clat (usec): min=179, max=1154, avg=245.03, stdev=23.56 00:14:42.109 lat (usec): min=190, max=1167, avg=256.86, stdev=24.41 00:14:42.109 clat percentiles (usec): 00:14:42.109 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:14:42.109 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:14:42.110 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:14:42.110 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 494], 99.95th=[ 603], 00:14:42.110 | 99.99th=[ 717] 00:14:42.110 bw ( KiB/s): min=14800, max=16184, per=25.86%, avg=15448.00, stdev=558.57, samples=5 00:14:42.110 iops : min= 3700, max= 4046, avg=3862.00, stdev=139.64, samples=5 00:14:42.110 lat (usec) : 250=65.75%, 500=34.14%, 750=0.09% 00:14:42.110 lat (msec) : 2=0.01% 00:14:42.110 cpu : usr=1.18%, sys=4.18%, ctx=11514, majf=0, minf=2 00:14:42.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.110 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.110 issued rwts: total=11511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.110 00:14:42.110 Run status group 0 (all jobs): 00:14:42.110 READ: bw=58.3MiB/s (61.2MB/s), 15.1MiB/s-17.6MiB/s (15.9MB/s-18.5MB/s), io=227MiB (238MB), run=2971-3885msec 00:14:42.110 00:14:42.110 Disk stats (read/write): 00:14:42.110 nvme0n1: ios=14929/0, merge=0/0, ticks=2997/0, in_queue=2997, util=95.05% 00:14:42.110 nvme0n2: ios=15569/0, merge=0/0, ticks=3475/0, in_queue=3475, util=95.33% 00:14:42.110 nvme0n3: ios=13931/0, merge=0/0, ticks=2868/0, in_queue=2868, util=96.21% 00:14:42.110 nvme0n4: ios=11109/0, merge=0/0, ticks=2600/0, in_queue=2600, util=96.79% 00:14:42.110 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:42.110 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:42.368 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:42.368 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:42.935 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:42.935 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:14:43.193 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:43.193 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:43.451 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:43.451 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66821 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.761 nvmf hotplug test: fio failed as expected 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:43.761 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:44.020 
11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.020 rmmod nvme_tcp 00:14:44.020 rmmod nvme_fabrics 00:14:44.020 rmmod nvme_keyring 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66434 ']' 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66434 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66434 ']' 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66434 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.020 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66434 00:14:44.278 killing process with pid 66434 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66434' 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66434 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66434 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:44.278 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.537 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:14:44.537 00:14:44.537 real 0m20.618s 00:14:44.537 user 1m17.843s 00:14:44.537 sys 0m9.963s 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.537 ************************************ 00:14:44.537 END TEST nvmf_fio_target 00:14:44.537 ************************************ 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.537 11:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:44.798 ************************************ 00:14:44.798 START TEST nvmf_bdevio 00:14:44.798 ************************************ 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:44.798 * Looking for test storage... 
00:14:44.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:44.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.798 --rc genhtml_branch_coverage=1 00:14:44.798 --rc genhtml_function_coverage=1 00:14:44.798 --rc genhtml_legend=1 00:14:44.798 --rc geninfo_all_blocks=1 00:14:44.798 --rc geninfo_unexecuted_blocks=1 00:14:44.798 00:14:44.798 ' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:44.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.798 --rc genhtml_branch_coverage=1 00:14:44.798 --rc genhtml_function_coverage=1 00:14:44.798 --rc genhtml_legend=1 00:14:44.798 --rc geninfo_all_blocks=1 00:14:44.798 --rc geninfo_unexecuted_blocks=1 00:14:44.798 00:14:44.798 ' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:44.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.798 --rc genhtml_branch_coverage=1 00:14:44.798 --rc genhtml_function_coverage=1 00:14:44.798 --rc genhtml_legend=1 00:14:44.798 --rc geninfo_all_blocks=1 00:14:44.798 --rc geninfo_unexecuted_blocks=1 00:14:44.798 00:14:44.798 ' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:44.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.798 --rc genhtml_branch_coverage=1 00:14:44.798 --rc genhtml_function_coverage=1 00:14:44.798 --rc genhtml_legend=1 00:14:44.798 --rc geninfo_all_blocks=1 00:14:44.798 --rc geninfo_unexecuted_blocks=1 00:14:44.798 00:14:44.798 ' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.798 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
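The nvmftestinit call above runs nvmf_veth_init from test/nvmf/common.sh; the trace that follows builds a veth and bridge topology with the target side isolated in the nvmf_tgt_ns_spdk network namespace. A condensed sketch of that setup, using the interface names and addresses from this run (only one of the two initiator/target veth pairs is shown; the full script also configures nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4):

    # sketch of the nvmf_veth_init topology (condensed, not the full script)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator reaches the target address across the bridge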
00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:44.799 Cannot find device "nvmf_init_br" 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:44.799 Cannot find device "nvmf_init_br2" 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:44.799 Cannot find device "nvmf_tgt_br" 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.799 Cannot find device "nvmf_tgt_br2" 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:44.799 Cannot find device "nvmf_init_br" 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:14:44.799 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.058 Cannot find device "nvmf_init_br2" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.058 Cannot find device "nvmf_tgt_br" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.058 Cannot find device "nvmf_tgt_br2" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.058 Cannot find device "nvmf_br" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.058 Cannot find device "nvmf_init_if" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.058 Cannot find device "nvmf_init_if2" 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.058 
11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:14:45.058 00:14:45.058 --- 10.0.0.3 ping statistics --- 00:14:45.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.058 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:45.058 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:45.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:45.058 00:14:45.058 --- 10.0.0.4 ping statistics --- 00:14:45.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.058 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:45.059 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:45.059 00:14:45.059 --- 10.0.0.1 ping statistics --- 00:14:45.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.059 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:45.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:45.317 00:14:45.317 --- 10.0.0.2 ping statistics --- 00:14:45.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.317 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=67193 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 67193 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67193 ']' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.317 11:25:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.317 [2024-10-07 11:25:40.685906] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:14:45.317 [2024-10-07 11:25:40.686013] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.317 [2024-10-07 11:25:40.826448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.576 [2024-10-07 11:25:40.936781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.576 [2024-10-07 11:25:40.937314] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.576 [2024-10-07 11:25:40.937850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.576 [2024-10-07 11:25:40.938340] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.576 [2024-10-07 11:25:40.938548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.576 [2024-10-07 11:25:40.940254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:14:45.576 [2024-10-07 11:25:40.940448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:14:45.576 [2024-10-07 11:25:40.940452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.576 [2024-10-07 11:25:40.940388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:14:45.576 [2024-10-07 11:25:40.997363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.576 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.576 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:14:45.576 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:45.576 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.576 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 [2024-10-07 11:25:41.126721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 Malloc0 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.834 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:45.835 [2024-10-07 11:25:41.189014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:14:45.835 { 00:14:45.835 "params": { 00:14:45.835 "name": "Nvme$subsystem", 00:14:45.835 "trtype": "$TEST_TRANSPORT", 00:14:45.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.835 "adrfam": "ipv4", 00:14:45.835 "trsvcid": "$NVMF_PORT", 00:14:45.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.835 "hdgst": ${hdgst:-false}, 00:14:45.835 "ddgst": ${ddgst:-false} 00:14:45.835 }, 00:14:45.835 "method": "bdev_nvme_attach_controller" 00:14:45.835 } 00:14:45.835 EOF 00:14:45.835 )") 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
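Before bdevio is launched, the rpc_cmd calls traced above stand up the target side: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1 and a TCP listener on 10.0.0.3:4420. Run by hand against the target's RPC socket (the default /var/tmp/spdk.sock that waitforlisten polled above), the equivalent sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio then connects to that subsystem as an initiator via the bdev_nvme_attach_controller JSON config printed just below.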
00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:14:45.835 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:14:45.835 "params": { 00:14:45.835 "name": "Nvme1", 00:14:45.835 "trtype": "tcp", 00:14:45.835 "traddr": "10.0.0.3", 00:14:45.835 "adrfam": "ipv4", 00:14:45.835 "trsvcid": "4420", 00:14:45.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.835 "hdgst": false, 00:14:45.835 "ddgst": false 00:14:45.835 }, 00:14:45.835 "method": "bdev_nvme_attach_controller" 00:14:45.835 }' 00:14:45.835 [2024-10-07 11:25:41.237894] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:14:45.835 [2024-10-07 11:25:41.237972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67226 ] 00:14:46.093 [2024-10-07 11:25:41.373310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:46.093 [2024-10-07 11:25:41.515049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.093 [2024-10-07 11:25:41.515208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.093 [2024-10-07 11:25:41.515214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.093 [2024-10-07 11:25:41.579882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.351 I/O targets: 00:14:46.351 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:46.351 00:14:46.351 00:14:46.351 CUnit - A unit testing framework for C - Version 2.1-3 00:14:46.351 http://cunit.sourceforge.net/ 00:14:46.351 00:14:46.351 00:14:46.351 Suite: bdevio tests on: Nvme1n1 00:14:46.351 Test: blockdev write read block ...passed 00:14:46.351 Test: blockdev write zeroes read block ...passed 00:14:46.351 Test: blockdev write zeroes read no split ...passed 00:14:46.351 Test: blockdev write zeroes read split ...passed 00:14:46.351 Test: blockdev write zeroes read split partial ...passed 00:14:46.351 Test: blockdev reset ...[2024-10-07 11:25:41.727174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:46.351 [2024-10-07 11:25:41.727278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a3040 (9): Bad file descriptor 00:14:46.351 [2024-10-07 11:25:41.745089] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:46.351 passed 00:14:46.351 Test: blockdev write read 8 blocks ...passed 00:14:46.351 Test: blockdev write read size > 128k ...passed 00:14:46.351 Test: blockdev write read invalid size ...passed 00:14:46.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.351 Test: blockdev write read max offset ...passed 00:14:46.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.351 Test: blockdev writev readv 8 blocks ...passed 00:14:46.351 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.351 Test: blockdev writev readv block ...passed 00:14:46.351 Test: blockdev writev readv size > 128k ...passed 00:14:46.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.351 Test: blockdev comparev and writev ...[2024-10-07 11:25:41.753653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.351 [2024-10-07 11:25:41.753820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:46.351 [2024-10-07 11:25:41.753850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.753863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.754181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.754199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.754538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.754557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.754584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.754954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGpassed 00:14:46.352 Test: blockdev nvme passthru rw ...L DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.755104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.755130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:46.352 [2024-10-07 11:25:41.755141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:46.352 passed 00:14:46.352 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:25:41.756327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:46.352 [2024-10-07 11:25:41.756353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.756475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:46.352 [2024-10-07 11:25:41.756492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:46.352 passed 00:14:46.352 Test: blockdev nvme admin passthru ...[2024-10-07 11:25:41.756599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:46.352 [2024-10-07 11:25:41.756622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:46.352 [2024-10-07 11:25:41.756734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:46.352 [2024-10-07 11:25:41.756750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:46.352 passed 00:14:46.352 Test: blockdev copy ...passed 00:14:46.352 00:14:46.352 Run Summary: Type Total Ran Passed Failed Inactive 00:14:46.352 suites 1 1 n/a 0 0 00:14:46.352 tests 23 23 23 0 0 00:14:46.352 asserts 152 152 152 0 n/a 00:14:46.352 00:14:46.352 Elapsed time = 0.144 seconds 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:46.610 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.611 rmmod nvme_tcp 00:14:46.611 rmmod nvme_fabrics 00:14:46.611 rmmod nvme_keyring 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:14:46.611 
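Teardown then proceeds from the trap installed earlier: nvmftestfini removes the nvme-tcp, nvme-fabrics and nvme-keyring modules (the rmmod lines above) and kills the nvmf_tgt application, pid 67193. A condensed reading of the killprocess helper as traced in the next entries, reconstructed from the xtrace rather than quoted from autotest_common.sh:

  killprocess() {                              # condensed sketch, not the real implementation
      local pid=$1
      [ -n "$pid" ] || return 1                # a pid argument is required
      kill -0 "$pid" 2>/dev/null || return     # bail out if it is not running
      local name
      name=$(ps --no-headers -o comm= "$pid")  # resolves to reactor_3 for this nvmf_tgt
      [ "$name" != sudo ] || return 1          # the sudo case takes a separate path in the real helper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                  # works here because nvmf_tgt is a child of the test shell
  }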
11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 67193 ']' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 67193 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67193 ']' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67193 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67193 00:14:46.611 killing process with pid 67193 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67193' 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67193 00:14:46.611 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67193 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.868 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:46.869 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:46.869 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:46.869 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link 
delete nvmf_init_if 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:14:47.126 ************************************ 00:14:47.126 END TEST nvmf_bdevio 00:14:47.126 ************************************ 00:14:47.126 00:14:47.126 real 0m2.516s 00:14:47.126 user 0m6.910s 00:14:47.126 sys 0m0.840s 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:47.126 00:14:47.126 ************************************ 00:14:47.126 END TEST nvmf_target_core 00:14:47.126 ************************************ 00:14:47.126 real 2m41.449s 00:14:47.126 user 7m6.784s 00:14:47.126 sys 0m52.211s 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.126 11:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:47.384 11:25:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:47.384 11:25:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:47.384 11:25:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.384 11:25:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.384 ************************************ 00:14:47.384 START TEST nvmf_target_extra 00:14:47.384 ************************************ 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:47.384 * Looking for test storage... 
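The nvmf_target_extra run starts with the same lcov gate traced next: the installed lcov reports version 1.15, which scripts/common.sh compares against 2 field by field (split on '.', '-' and ':'). Condensed to the single step this run actually takes, as a sketch rather than the verbatim cmp_versions source:

  installed=1.15 required=2
  IFS=.-: read -ra ver1 <<< "$installed"   # -> (1 15), ver1_l=2
  IFS=.-: read -ra ver2 <<< "$required"    # -> (2),    ver2_l=1
  if (( ver1[0] < ver2[0] )); then         # 1 < 2, so 'lt 1.15 2' returns 0
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi

Because the installed lcov predates 2.0, the legacy --rc lcov_*_coverage=1 options are exported into LCOV_OPTS/LCOV for the rest of the suite.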
00:14:47.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.384 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:47.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.385 --rc genhtml_branch_coverage=1 00:14:47.385 --rc genhtml_function_coverage=1 00:14:47.385 --rc genhtml_legend=1 00:14:47.385 --rc geninfo_all_blocks=1 00:14:47.385 --rc geninfo_unexecuted_blocks=1 00:14:47.385 00:14:47.385 ' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:47.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.385 --rc genhtml_branch_coverage=1 00:14:47.385 --rc genhtml_function_coverage=1 00:14:47.385 --rc genhtml_legend=1 00:14:47.385 --rc geninfo_all_blocks=1 00:14:47.385 --rc geninfo_unexecuted_blocks=1 00:14:47.385 00:14:47.385 ' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:47.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.385 --rc genhtml_branch_coverage=1 00:14:47.385 --rc genhtml_function_coverage=1 00:14:47.385 --rc genhtml_legend=1 00:14:47.385 --rc geninfo_all_blocks=1 00:14:47.385 --rc geninfo_unexecuted_blocks=1 00:14:47.385 00:14:47.385 ' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:47.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.385 --rc genhtml_branch_coverage=1 00:14:47.385 --rc genhtml_function_coverage=1 00:14:47.385 --rc genhtml_legend=1 00:14:47.385 --rc geninfo_all_blocks=1 00:14:47.385 --rc geninfo_unexecuted_blocks=1 00:14:47.385 00:14:47.385 ' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.385 11:25:42 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.385 ************************************ 00:14:47.385 START TEST nvmf_auth_target 00:14:47.385 ************************************ 00:14:47.385 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:47.644 * Looking for test storage... 
00:14:47.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.644 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:47.644 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:47.644 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:47.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.644 --rc genhtml_branch_coverage=1 00:14:47.644 --rc genhtml_function_coverage=1 00:14:47.644 --rc genhtml_legend=1 00:14:47.644 --rc geninfo_all_blocks=1 00:14:47.644 --rc geninfo_unexecuted_blocks=1 00:14:47.644 00:14:47.644 ' 00:14:47.644 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.645 --rc genhtml_branch_coverage=1 00:14:47.645 --rc genhtml_function_coverage=1 00:14:47.645 --rc genhtml_legend=1 00:14:47.645 --rc geninfo_all_blocks=1 00:14:47.645 --rc geninfo_unexecuted_blocks=1 00:14:47.645 00:14:47.645 ' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.645 --rc genhtml_branch_coverage=1 00:14:47.645 --rc genhtml_function_coverage=1 00:14:47.645 --rc genhtml_legend=1 00:14:47.645 --rc geninfo_all_blocks=1 00:14:47.645 --rc geninfo_unexecuted_blocks=1 00:14:47.645 00:14:47.645 ' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.645 --rc genhtml_branch_coverage=1 00:14:47.645 --rc genhtml_function_coverage=1 00:14:47.645 --rc genhtml_legend=1 00:14:47.645 --rc geninfo_all_blocks=1 00:14:47.645 --rc geninfo_unexecuted_blocks=1 00:14:47.645 00:14:47.645 ' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.645 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.645 
11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:47.645 Cannot find device "nvmf_init_br" 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:47.645 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:47.645 Cannot find device "nvmf_init_br2" 00:14:47.646 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:47.646 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:47.646 Cannot find device "nvmf_tgt_br" 00:14:47.646 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:14:47.646 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.904 Cannot find device "nvmf_tgt_br2" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:47.904 Cannot find device "nvmf_init_br" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:47.904 Cannot find device "nvmf_init_br2" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:47.904 Cannot find device "nvmf_tgt_br" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:47.904 Cannot find device "nvmf_tgt_br2" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:47.904 Cannot find device "nvmf_br" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:47.904 Cannot find device "nvmf_init_if" 00:14:47.904 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:47.904 Cannot find device "nvmf_init_if2" 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.904 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:47.904 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:48.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:48.163 00:14:48.163 --- 10.0.0.3 ping statistics --- 00:14:48.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.163 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:48.163 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:48.163 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:48.163 00:14:48.163 --- 10.0.0.4 ping statistics --- 00:14:48.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.163 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:48.163 00:14:48.163 --- 10.0.0.1 ping statistics --- 00:14:48.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.163 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:48.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:48.163 00:14:48.163 --- 10.0.0.2 ping statistics --- 00:14:48.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.163 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67510 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67510 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67510 ']' 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
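The four pings above verify the virtual topology that nvmf_veth_init just built and inside which the auth-test nvmf_tgt (pid 67510) has just been launched. Condensed from the commands traced above, as a summary of what was configured rather than additional setup:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24, moved into the netns
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side, 10.0.0.4/24, moved into the netns
  ip link add nvmf_br type bridge                                # all four *_br peers are enslaved to nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # plus nvmf_init_if2 and a FORWARD rule on the bridge

Because the target runs inside nvmf_tgt_ns_spdk, it listens on 10.0.0.3/10.0.0.4 while the initiator side of each test connects from 10.0.0.1/10.0.0.2 across the bridge.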
00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.163 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67542 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6744d3c674374f39e9e4f59eac5c9e56b6219438b2d3c538 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Bv0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6744d3c674374f39e9e4f59eac5c9e56b6219438b2d3c538 0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6744d3c674374f39e9e4f59eac5c9e56b6219438b2d3c538 0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6744d3c674374f39e9e4f59eac5c9e56b6219438b2d3c538 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.544 11:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Bv0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Bv0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Bv0 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=016fb8f20a6c8404eb595689fe4a8b3694bf2eb14b19bd84a56d5d144b4e3e28 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ycb 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 016fb8f20a6c8404eb595689fe4a8b3694bf2eb14b19bd84a56d5d144b4e3e28 3 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 016fb8f20a6c8404eb595689fe4a8b3694bf2eb14b19bd84a56d5d144b4e3e28 3 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=016fb8f20a6c8404eb595689fe4a8b3694bf2eb14b19bd84a56d5d144b4e3e28 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ycb 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ycb 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ycb 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.544 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:49.545 11:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=56f5f16e486dee6c8c96e822f4ef3539 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.SNU 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 56f5f16e486dee6c8c96e822f4ef3539 1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 56f5f16e486dee6c8c96e822f4ef3539 1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=56f5f16e486dee6c8c96e822f4ef3539 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.SNU 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.SNU 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.SNU 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ab51567a31231d20083be48a02027feea06f88d498e693c5 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Lna 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ab51567a31231d20083be48a02027feea06f88d498e693c5 2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ab51567a31231d20083be48a02027feea06f88d498e693c5 2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ab51567a31231d20083be48a02027feea06f88d498e693c5 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Lna 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Lna 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Lna 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c6eaee5f288597b0459e36265a7d12b8236819bd66340e71 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.aUt 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c6eaee5f288597b0459e36265a7d12b8236819bd66340e71 2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c6eaee5f288597b0459e36265a7d12b8236819bd66340e71 2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c6eaee5f288597b0459e36265a7d12b8236819bd66340e71 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:49.545 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.aUt 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.aUt 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.aUt 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.545 11:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=55eb3bca37868a46d01d7a62a2b6d796 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Lk1 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 55eb3bca37868a46d01d7a62a2b6d796 1 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 55eb3bca37868a46d01d7a62a2b6d796 1 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=55eb3bca37868a46d01d7a62a2b6d796 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:49.545 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Lk1 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Lk1 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Lk1 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:49.803 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=acc5294c4235342f4ec9363efd14d21f03934b480d1480562a0aa3614300853f 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Tdi 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
acc5294c4235342f4ec9363efd14d21f03934b480d1480562a0aa3614300853f 3 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 acc5294c4235342f4ec9363efd14d21f03934b480d1480562a0aa3614300853f 3 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=acc5294c4235342f4ec9363efd14d21f03934b480d1480562a0aa3614300853f 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Tdi 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Tdi 00:14:49.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Tdi 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67510 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67510 ']' 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.804 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67542 /var/tmp/host.sock 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67542 ']' 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
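Each gen_dhchap_key call traced above draws random bytes with xxd, wraps them in the DH-HMAC-CHAP secret representation (DHHC-1:<hash>:<base64 of the key bytes plus a trailing CRC-32>:), writes the result to a mktemp file, and restricts it to mode 0600. A condensed sketch of the same idea in plain shell is shown below; the inline python is an approximation of the script's own formatting step, and the little-endian byte order of the appended CRC is an assumption based on the usual DH-HMAC-CHAP secret encoding, not something visible in the log.

# 24 random bytes -> 48-character hex secret (as in gen_dhchap_key null 48)
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)

# DHHC-1:<hash>:<base64(key || crc32(key))>:  hash 00=null, 01=sha256, 02=sha384, 03=sha512
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian, see note above
print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"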
00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.062 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bv0 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Bv0 00:14:50.326 11:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Bv0 00:14:50.592 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ycb ]] 00:14:50.592 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycb 00:14:50.592 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.592 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.850 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.850 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycb 00:14:50.850 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycb 00:14:51.108 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:51.108 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SNU 00:14:51.108 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.108 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.108 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.109 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SNU 00:14:51.109 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.SNU 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Lna ]] 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna 00:14:51.367 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aUt 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aUt 00:14:51.625 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aUt 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Lk1 ]] 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lk1 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lk1 00:14:51.885 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lk1 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Tdi 00:14:52.216 11:25:47 
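The add-key round above registers every generated secret twice: once with the nvmf target over the default /var/tmp/spdk.sock RPC socket (rpc_cmd) and once with the host-side spdk_tgt over /var/tmp/host.sock (hostrpc). Stripped of the xtrace noise, the key1/ckey1 round is roughly the following; the rpc variable is only a shorthand for this sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target keyring (nvmf_tgt, default RPC socket)
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.SNU
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna

# host keyring (spdk_tgt acting as the initiator)
"$rpc" -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.SNU
"$rpc" -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna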
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Tdi 00:14:52.216 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Tdi 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.475 11:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.733 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.990 00:14:52.990 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.990 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.990 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.247 { 00:14:53.247 "cntlid": 1, 00:14:53.247 "qid": 0, 00:14:53.247 "state": "enabled", 00:14:53.247 "thread": "nvmf_tgt_poll_group_000", 00:14:53.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:14:53.247 "listen_address": { 00:14:53.247 "trtype": "TCP", 00:14:53.247 "adrfam": "IPv4", 00:14:53.247 "traddr": "10.0.0.3", 00:14:53.247 "trsvcid": "4420" 00:14:53.247 }, 00:14:53.247 "peer_address": { 00:14:53.247 "trtype": "TCP", 00:14:53.247 "adrfam": "IPv4", 00:14:53.247 "traddr": "10.0.0.1", 00:14:53.247 "trsvcid": "33552" 00:14:53.247 }, 00:14:53.247 "auth": { 00:14:53.247 "state": "completed", 00:14:53.247 "digest": "sha256", 00:14:53.247 "dhgroup": "null" 00:14:53.247 } 00:14:53.247 } 00:14:53.247 ]' 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.247 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.505 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.505 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.505 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.764 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:14:53.764 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.033 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.033 11:25:53 
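Each connect_authenticate round follows the same pattern visible in the trace: pin the host to one digest/dhgroup combination, allow the host NQN on the subsystem with a specific key pair, attach a controller with the matching keys, and then read the qpair's auth block back from the target. A condensed sketch of the key1 round, taken from the traced commands with the output checks omitted and the shell variables introduced only for readability, is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1

# host side: restrict negotiation to sha256 / null for this round
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target side: authorize the host with the round's key pair
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach a controller, authenticating with the same keys
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# target side: the qpair's auth state should read "completed"
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'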
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.033 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.033 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.033 { 00:14:59.033 "cntlid": 3, 00:14:59.033 "qid": 0, 00:14:59.033 "state": "enabled", 00:14:59.033 "thread": "nvmf_tgt_poll_group_000", 00:14:59.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:14:59.033 "listen_address": { 00:14:59.033 "trtype": "TCP", 00:14:59.033 "adrfam": "IPv4", 00:14:59.033 "traddr": "10.0.0.3", 00:14:59.033 "trsvcid": "4420" 00:14:59.033 }, 00:14:59.033 "peer_address": { 00:14:59.033 "trtype": "TCP", 00:14:59.033 "adrfam": "IPv4", 00:14:59.034 "traddr": "10.0.0.1", 00:14:59.034 "trsvcid": "37442" 00:14:59.034 }, 00:14:59.034 "auth": { 00:14:59.034 "state": "completed", 00:14:59.034 "digest": "sha256", 00:14:59.034 "dhgroup": "null" 00:14:59.034 } 00:14:59.034 } 00:14:59.034 ]' 00:14:59.034 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.034 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.034 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.291 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.291 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.291 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.291 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.291 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.549 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret 
DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:14:59.549 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.482 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.740 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.997 00:15:00.997 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.997 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.997 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.254 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.254 { 00:15:01.254 "cntlid": 5, 00:15:01.255 "qid": 0, 00:15:01.255 "state": "enabled", 00:15:01.255 "thread": "nvmf_tgt_poll_group_000", 00:15:01.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:01.255 "listen_address": { 00:15:01.255 "trtype": "TCP", 00:15:01.255 "adrfam": "IPv4", 00:15:01.255 "traddr": "10.0.0.3", 00:15:01.255 "trsvcid": "4420" 00:15:01.255 }, 00:15:01.255 "peer_address": { 00:15:01.255 "trtype": "TCP", 00:15:01.255 "adrfam": "IPv4", 00:15:01.255 "traddr": "10.0.0.1", 00:15:01.255 "trsvcid": "37458" 00:15:01.255 }, 00:15:01.255 "auth": { 00:15:01.255 "state": "completed", 00:15:01.255 "digest": "sha256", 00:15:01.255 "dhgroup": "null" 00:15:01.255 } 00:15:01.255 } 00:15:01.255 ]' 00:15:01.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.255 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.512 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.512 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.512 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.770 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:01.770 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.336 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.902 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.160 00:15:03.160 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.160 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.160 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.418 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.418 { 00:15:03.418 "cntlid": 7, 00:15:03.418 "qid": 0, 00:15:03.418 "state": "enabled", 00:15:03.418 "thread": "nvmf_tgt_poll_group_000", 00:15:03.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:03.418 "listen_address": { 00:15:03.418 "trtype": "TCP", 00:15:03.418 "adrfam": "IPv4", 00:15:03.418 "traddr": "10.0.0.3", 00:15:03.418 "trsvcid": "4420" 00:15:03.418 }, 00:15:03.418 "peer_address": { 00:15:03.418 "trtype": "TCP", 00:15:03.418 "adrfam": "IPv4", 00:15:03.418 "traddr": "10.0.0.1", 00:15:03.418 "trsvcid": "37484" 00:15:03.418 }, 00:15:03.418 "auth": { 00:15:03.419 "state": "completed", 00:15:03.419 "digest": "sha256", 00:15:03.419 "dhgroup": "null" 00:15:03.419 } 00:15:03.419 } 00:15:03.419 ]' 00:15:03.419 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.419 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.419 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.677 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.677 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.677 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.677 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.677 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.940 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:03.940 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:04.508 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.508 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:04.508 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.508 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.767 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.767 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.767 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.767 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:04.767 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.025 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.285 00:15:05.285 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.285 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.285 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.544 { 00:15:05.544 "cntlid": 9, 00:15:05.544 "qid": 0, 00:15:05.544 "state": "enabled", 00:15:05.544 "thread": "nvmf_tgt_poll_group_000", 00:15:05.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:05.544 "listen_address": { 00:15:05.544 "trtype": "TCP", 00:15:05.544 "adrfam": "IPv4", 00:15:05.544 "traddr": "10.0.0.3", 00:15:05.544 "trsvcid": "4420" 00:15:05.544 }, 00:15:05.544 "peer_address": { 00:15:05.544 "trtype": "TCP", 00:15:05.544 "adrfam": "IPv4", 00:15:05.544 "traddr": "10.0.0.1", 00:15:05.544 "trsvcid": "35438" 00:15:05.544 }, 00:15:05.544 "auth": { 00:15:05.544 "state": "completed", 00:15:05.544 "digest": "sha256", 00:15:05.544 "dhgroup": "ffdhe2048" 00:15:05.544 } 00:15:05.544 } 00:15:05.544 ]' 00:15:05.544 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.803 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.061 
11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:06.061 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.637 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.925 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.492 00:15:07.492 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.492 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.492 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.492 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.492 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.492 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.492 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.751 { 00:15:07.751 "cntlid": 11, 00:15:07.751 "qid": 0, 00:15:07.751 "state": "enabled", 00:15:07.751 "thread": "nvmf_tgt_poll_group_000", 00:15:07.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:07.751 "listen_address": { 00:15:07.751 "trtype": "TCP", 00:15:07.751 "adrfam": "IPv4", 00:15:07.751 "traddr": "10.0.0.3", 00:15:07.751 "trsvcid": "4420" 00:15:07.751 }, 00:15:07.751 "peer_address": { 00:15:07.751 "trtype": "TCP", 00:15:07.751 "adrfam": "IPv4", 00:15:07.751 "traddr": "10.0.0.1", 00:15:07.751 "trsvcid": "35460" 00:15:07.751 }, 00:15:07.751 "auth": { 00:15:07.751 "state": "completed", 00:15:07.751 "digest": "sha256", 00:15:07.751 "dhgroup": "ffdhe2048" 00:15:07.751 } 00:15:07.751 } 00:15:07.751 ]' 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.751 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.751 
11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.009 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:08.009 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.944 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.203 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.530 00:15:09.530 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.530 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.530 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.788 { 00:15:09.788 "cntlid": 13, 00:15:09.788 "qid": 0, 00:15:09.788 "state": "enabled", 00:15:09.788 "thread": "nvmf_tgt_poll_group_000", 00:15:09.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:09.788 "listen_address": { 00:15:09.788 "trtype": "TCP", 00:15:09.788 "adrfam": "IPv4", 00:15:09.788 "traddr": "10.0.0.3", 00:15:09.788 "trsvcid": "4420" 00:15:09.788 }, 00:15:09.788 "peer_address": { 00:15:09.788 "trtype": "TCP", 00:15:09.788 "adrfam": "IPv4", 00:15:09.788 "traddr": "10.0.0.1", 00:15:09.788 "trsvcid": "35486" 00:15:09.788 }, 00:15:09.788 "auth": { 00:15:09.788 "state": "completed", 00:15:09.788 "digest": "sha256", 00:15:09.788 "dhgroup": "ffdhe2048" 00:15:09.788 } 00:15:09.788 } 00:15:09.788 ]' 00:15:09.788 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.046 11:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.046 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.304 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:10.304 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.870 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
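For readability, the cycle this log repeats for every digest/DH-group/key combination can be condensed into a short sequence. The sketch below is assembled only from commands already visible in the trace: `hostrpc` is the test helper that expands to `scripts/rpc.py -s /var/tmp/host.sock` (the initiator-side SPDK app), `rpc_cmd` talks to the nvmf target's own RPC socket, `key0`/`ckey0` are keyring names the test registered earlier in the run, and the `DHHC-1:...` strings stand in for the generated secrets printed in the log. Exact flag spellings follow the SPDK revision under test (2a4f56c54) and may differ in other releases.

  # Initiator side: restrict DH-HMAC-CHAP negotiation to the digest/group under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host NQN on the subsystem with its key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Initiator side: attach a controller over TCP with the same keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: the qpair must report completed authentication with the expected parameters
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # "completed"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # "sha256"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # "ffdhe2048"

  # Drop the SPDK-side controller, then repeat the check with the kernel initiator and raw DHHC-1 secrets
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 \
      --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 \
      --dhchap-secret 'DHHC-1:00:<key0 secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Tear down before the next key/group combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1

The remainder of this section runs the same cycle for the remaining keys and then repeats it with ffdhe3072 and ffdhe4096 as the DH group.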
00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.439 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.697 00:15:11.697 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.697 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.697 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.955 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.955 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.955 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.956 { 00:15:11.956 "cntlid": 15, 00:15:11.956 "qid": 0, 00:15:11.956 "state": "enabled", 00:15:11.956 "thread": "nvmf_tgt_poll_group_000", 00:15:11.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:11.956 "listen_address": { 00:15:11.956 "trtype": "TCP", 00:15:11.956 "adrfam": "IPv4", 00:15:11.956 "traddr": "10.0.0.3", 00:15:11.956 "trsvcid": "4420" 00:15:11.956 }, 00:15:11.956 "peer_address": { 00:15:11.956 "trtype": "TCP", 00:15:11.956 "adrfam": "IPv4", 00:15:11.956 "traddr": "10.0.0.1", 00:15:11.956 "trsvcid": "35502" 00:15:11.956 }, 00:15:11.956 "auth": { 00:15:11.956 "state": "completed", 00:15:11.956 "digest": "sha256", 00:15:11.956 "dhgroup": "ffdhe2048" 00:15:11.956 } 00:15:11.956 } 00:15:11.956 ]' 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.956 
11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.956 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.214 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:12.214 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.176 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.743 00:15:13.743 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.743 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.743 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.001 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.001 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.001 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.002 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.002 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.002 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.002 { 00:15:14.002 "cntlid": 17, 00:15:14.002 "qid": 0, 00:15:14.002 "state": "enabled", 00:15:14.002 "thread": "nvmf_tgt_poll_group_000", 00:15:14.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:14.002 "listen_address": { 00:15:14.002 "trtype": "TCP", 00:15:14.002 "adrfam": "IPv4", 00:15:14.002 "traddr": "10.0.0.3", 00:15:14.002 "trsvcid": "4420" 00:15:14.002 }, 00:15:14.002 "peer_address": { 00:15:14.002 "trtype": "TCP", 00:15:14.002 "adrfam": "IPv4", 00:15:14.002 "traddr": "10.0.0.1", 00:15:14.002 "trsvcid": "35534" 00:15:14.002 }, 00:15:14.002 "auth": { 00:15:14.002 "state": "completed", 00:15:14.002 "digest": "sha256", 00:15:14.002 "dhgroup": "ffdhe3072" 00:15:14.002 } 00:15:14.002 } 00:15:14.002 ]' 00:15:14.002 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.260 11:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.260 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.518 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:14.518 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.085 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.652 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.911 00:15:15.911 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.911 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.911 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.169 { 00:15:16.169 "cntlid": 19, 00:15:16.169 "qid": 0, 00:15:16.169 "state": "enabled", 00:15:16.169 "thread": "nvmf_tgt_poll_group_000", 00:15:16.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:16.169 "listen_address": { 00:15:16.169 "trtype": "TCP", 00:15:16.169 "adrfam": "IPv4", 00:15:16.169 "traddr": "10.0.0.3", 00:15:16.169 "trsvcid": "4420" 00:15:16.169 }, 00:15:16.169 "peer_address": { 00:15:16.169 "trtype": "TCP", 00:15:16.169 "adrfam": "IPv4", 00:15:16.169 "traddr": "10.0.0.1", 00:15:16.169 "trsvcid": "43430" 00:15:16.169 }, 00:15:16.169 "auth": { 00:15:16.169 "state": "completed", 00:15:16.169 "digest": "sha256", 00:15:16.169 "dhgroup": "ffdhe3072" 00:15:16.169 } 00:15:16.169 } 00:15:16.169 ]' 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.169 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.735 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:16.735 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.302 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.560 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.561 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.561 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.561 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.561 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.561 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.128 00:15:18.128 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.128 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.128 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.387 { 00:15:18.387 "cntlid": 21, 00:15:18.387 "qid": 0, 00:15:18.387 "state": "enabled", 00:15:18.387 "thread": "nvmf_tgt_poll_group_000", 00:15:18.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:18.387 "listen_address": { 00:15:18.387 "trtype": "TCP", 00:15:18.387 "adrfam": "IPv4", 00:15:18.387 "traddr": "10.0.0.3", 00:15:18.387 "trsvcid": "4420" 00:15:18.387 }, 00:15:18.387 "peer_address": { 00:15:18.387 "trtype": "TCP", 00:15:18.387 "adrfam": "IPv4", 00:15:18.387 "traddr": "10.0.0.1", 00:15:18.387 "trsvcid": "43468" 00:15:18.387 }, 00:15:18.387 "auth": { 00:15:18.387 "state": "completed", 00:15:18.387 "digest": "sha256", 00:15:18.387 "dhgroup": "ffdhe3072" 00:15:18.387 } 00:15:18.387 } 00:15:18.387 ]' 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.387 11:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.387 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.646 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:18.646 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.578 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.836 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.094 00:15:20.094 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.094 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.094 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.352 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.352 { 00:15:20.352 "cntlid": 23, 00:15:20.352 "qid": 0, 00:15:20.352 "state": "enabled", 00:15:20.352 "thread": "nvmf_tgt_poll_group_000", 00:15:20.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:20.353 "listen_address": { 00:15:20.353 "trtype": "TCP", 00:15:20.353 "adrfam": "IPv4", 00:15:20.353 "traddr": "10.0.0.3", 00:15:20.353 "trsvcid": "4420" 00:15:20.353 }, 00:15:20.353 "peer_address": { 00:15:20.353 "trtype": "TCP", 00:15:20.353 "adrfam": "IPv4", 00:15:20.353 "traddr": "10.0.0.1", 00:15:20.353 "trsvcid": "43514" 00:15:20.353 }, 00:15:20.353 "auth": { 00:15:20.353 "state": "completed", 00:15:20.353 "digest": "sha256", 00:15:20.353 "dhgroup": "ffdhe3072" 00:15:20.353 } 00:15:20.353 } 00:15:20.353 ]' 00:15:20.353 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.353 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:20.353 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.353 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.353 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.611 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.611 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.611 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.869 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:20.869 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:21.442 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.442 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:21.442 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.442 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.701 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.701 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.701 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.701 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.701 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.959 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.218 00:15:22.218 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.218 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.218 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.477 { 00:15:22.477 "cntlid": 25, 00:15:22.477 "qid": 0, 00:15:22.477 "state": "enabled", 00:15:22.477 "thread": "nvmf_tgt_poll_group_000", 00:15:22.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:22.477 "listen_address": { 00:15:22.477 "trtype": "TCP", 00:15:22.477 "adrfam": "IPv4", 00:15:22.477 "traddr": "10.0.0.3", 00:15:22.477 "trsvcid": "4420" 00:15:22.477 }, 00:15:22.477 "peer_address": { 00:15:22.477 "trtype": "TCP", 00:15:22.477 "adrfam": "IPv4", 00:15:22.477 "traddr": "10.0.0.1", 00:15:22.477 "trsvcid": "43542" 00:15:22.477 }, 00:15:22.477 "auth": { 00:15:22.477 "state": "completed", 00:15:22.477 "digest": "sha256", 00:15:22.477 "dhgroup": "ffdhe4096" 00:15:22.477 } 00:15:22.477 } 00:15:22.477 ]' 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.477 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.736 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.736 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.736 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.994 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:22.994 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:23.561 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.561 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:23.561 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.561 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.561 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.562 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.562 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.562 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.820 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.386 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.386 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.644 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.644 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.644 { 00:15:24.644 "cntlid": 27, 00:15:24.644 "qid": 0, 00:15:24.644 "state": "enabled", 00:15:24.644 "thread": "nvmf_tgt_poll_group_000", 00:15:24.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:24.644 "listen_address": { 00:15:24.644 "trtype": "TCP", 00:15:24.644 "adrfam": "IPv4", 00:15:24.644 "traddr": "10.0.0.3", 00:15:24.644 "trsvcid": "4420" 00:15:24.644 }, 00:15:24.644 "peer_address": { 00:15:24.644 "trtype": "TCP", 00:15:24.644 "adrfam": "IPv4", 00:15:24.644 "traddr": "10.0.0.1", 00:15:24.644 "trsvcid": "43570" 00:15:24.644 }, 00:15:24.644 "auth": { 00:15:24.644 "state": "completed", 
00:15:24.644 "digest": "sha256", 00:15:24.644 "dhgroup": "ffdhe4096" 00:15:24.644 } 00:15:24.644 } 00:15:24.644 ]' 00:15:24.644 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.644 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.644 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.644 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.644 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.644 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.644 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.644 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.902 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:24.902 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.836 11:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.836 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.403 00:15:26.403 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.403 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.403 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.662 { 00:15:26.662 "cntlid": 29, 00:15:26.662 "qid": 0, 00:15:26.662 "state": "enabled", 00:15:26.662 "thread": "nvmf_tgt_poll_group_000", 00:15:26.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:26.662 "listen_address": { 00:15:26.662 "trtype": "TCP", 00:15:26.662 "adrfam": "IPv4", 00:15:26.662 "traddr": "10.0.0.3", 00:15:26.662 "trsvcid": "4420" 00:15:26.662 }, 00:15:26.662 "peer_address": { 00:15:26.662 "trtype": "TCP", 00:15:26.662 "adrfam": 
"IPv4", 00:15:26.662 "traddr": "10.0.0.1", 00:15:26.662 "trsvcid": "44418" 00:15:26.662 }, 00:15:26.662 "auth": { 00:15:26.662 "state": "completed", 00:15:26.662 "digest": "sha256", 00:15:26.662 "dhgroup": "ffdhe4096" 00:15:26.662 } 00:15:26.662 } 00:15:26.662 ]' 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.662 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.921 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.921 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.921 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.179 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:27.179 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.747 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:28.005 11:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.005 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.006 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.264 00:15:28.264 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.264 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.264 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.851 { 00:15:28.851 "cntlid": 31, 00:15:28.851 "qid": 0, 00:15:28.851 "state": "enabled", 00:15:28.851 "thread": "nvmf_tgt_poll_group_000", 00:15:28.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:28.851 "listen_address": { 00:15:28.851 "trtype": "TCP", 00:15:28.851 "adrfam": "IPv4", 00:15:28.851 "traddr": "10.0.0.3", 00:15:28.851 "trsvcid": "4420" 00:15:28.851 }, 00:15:28.851 "peer_address": { 00:15:28.851 "trtype": "TCP", 
00:15:28.851 "adrfam": "IPv4", 00:15:28.851 "traddr": "10.0.0.1", 00:15:28.851 "trsvcid": "44442" 00:15:28.851 }, 00:15:28.851 "auth": { 00:15:28.851 "state": "completed", 00:15:28.851 "digest": "sha256", 00:15:28.851 "dhgroup": "ffdhe4096" 00:15:28.851 } 00:15:28.851 } 00:15:28.851 ]' 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.851 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.110 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:29.110 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:29.677 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.936 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:30.194 
11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.194 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.807 00:15:30.807 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.807 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.807 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.067 { 00:15:31.067 "cntlid": 33, 00:15:31.067 "qid": 0, 00:15:31.067 "state": "enabled", 00:15:31.067 "thread": "nvmf_tgt_poll_group_000", 00:15:31.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:31.067 "listen_address": { 00:15:31.067 "trtype": "TCP", 00:15:31.067 "adrfam": "IPv4", 00:15:31.067 "traddr": 
"10.0.0.3", 00:15:31.067 "trsvcid": "4420" 00:15:31.067 }, 00:15:31.067 "peer_address": { 00:15:31.067 "trtype": "TCP", 00:15:31.067 "adrfam": "IPv4", 00:15:31.067 "traddr": "10.0.0.1", 00:15:31.067 "trsvcid": "44470" 00:15:31.067 }, 00:15:31.067 "auth": { 00:15:31.067 "state": "completed", 00:15:31.067 "digest": "sha256", 00:15:31.067 "dhgroup": "ffdhe6144" 00:15:31.067 } 00:15:31.067 } 00:15:31.067 ]' 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.067 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.325 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:31.325 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:32.263 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.263 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:32.263 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.264 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.830 00:15:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.830 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.128 { 00:15:33.128 "cntlid": 35, 00:15:33.128 "qid": 0, 00:15:33.128 "state": "enabled", 00:15:33.128 "thread": "nvmf_tgt_poll_group_000", 
00:15:33.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:33.128 "listen_address": { 00:15:33.128 "trtype": "TCP", 00:15:33.128 "adrfam": "IPv4", 00:15:33.128 "traddr": "10.0.0.3", 00:15:33.128 "trsvcid": "4420" 00:15:33.128 }, 00:15:33.128 "peer_address": { 00:15:33.128 "trtype": "TCP", 00:15:33.128 "adrfam": "IPv4", 00:15:33.128 "traddr": "10.0.0.1", 00:15:33.128 "trsvcid": "44496" 00:15:33.128 }, 00:15:33.128 "auth": { 00:15:33.128 "state": "completed", 00:15:33.128 "digest": "sha256", 00:15:33.128 "dhgroup": "ffdhe6144" 00:15:33.128 } 00:15:33.128 } 00:15:33.128 ]' 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.128 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.386 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:33.386 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.321 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.321 11:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.579 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.838 00:15:35.097 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.097 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.097 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.355 { 
00:15:35.355 "cntlid": 37, 00:15:35.355 "qid": 0, 00:15:35.355 "state": "enabled", 00:15:35.355 "thread": "nvmf_tgt_poll_group_000", 00:15:35.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:35.355 "listen_address": { 00:15:35.355 "trtype": "TCP", 00:15:35.355 "adrfam": "IPv4", 00:15:35.355 "traddr": "10.0.0.3", 00:15:35.355 "trsvcid": "4420" 00:15:35.355 }, 00:15:35.355 "peer_address": { 00:15:35.355 "trtype": "TCP", 00:15:35.355 "adrfam": "IPv4", 00:15:35.355 "traddr": "10.0.0.1", 00:15:35.355 "trsvcid": "58048" 00:15:35.355 }, 00:15:35.355 "auth": { 00:15:35.355 "state": "completed", 00:15:35.355 "digest": "sha256", 00:15:35.355 "dhgroup": "ffdhe6144" 00:15:35.355 } 00:15:35.355 } 00:15:35.355 ]' 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.355 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.921 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:35.921 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.488 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.747 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.313 00:15:37.313 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.313 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.313 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:37.572 { 00:15:37.572 "cntlid": 39, 00:15:37.572 "qid": 0, 00:15:37.572 "state": "enabled", 00:15:37.572 "thread": "nvmf_tgt_poll_group_000", 00:15:37.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:37.572 "listen_address": { 00:15:37.572 "trtype": "TCP", 00:15:37.572 "adrfam": "IPv4", 00:15:37.572 "traddr": "10.0.0.3", 00:15:37.572 "trsvcid": "4420" 00:15:37.572 }, 00:15:37.572 "peer_address": { 00:15:37.572 "trtype": "TCP", 00:15:37.572 "adrfam": "IPv4", 00:15:37.572 "traddr": "10.0.0.1", 00:15:37.572 "trsvcid": "58074" 00:15:37.572 }, 00:15:37.572 "auth": { 00:15:37.572 "state": "completed", 00:15:37.572 "digest": "sha256", 00:15:37.572 "dhgroup": "ffdhe6144" 00:15:37.572 } 00:15:37.572 } 00:15:37.572 ]' 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.572 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.572 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.572 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.831 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.831 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.831 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.088 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:38.088 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:38.654 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.654 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.912 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.480 00:15:39.480 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.480 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.480 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.739 { 00:15:39.739 "cntlid": 41, 00:15:39.739 "qid": 0, 00:15:39.739 "state": "enabled", 00:15:39.739 "thread": "nvmf_tgt_poll_group_000", 00:15:39.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:39.739 "listen_address": { 00:15:39.739 "trtype": "TCP", 00:15:39.739 "adrfam": "IPv4", 00:15:39.739 "traddr": "10.0.0.3", 00:15:39.739 "trsvcid": "4420" 00:15:39.739 }, 00:15:39.739 "peer_address": { 00:15:39.739 "trtype": "TCP", 00:15:39.739 "adrfam": "IPv4", 00:15:39.739 "traddr": "10.0.0.1", 00:15:39.739 "trsvcid": "58116" 00:15:39.739 }, 00:15:39.739 "auth": { 00:15:39.739 "state": "completed", 00:15:39.739 "digest": "sha256", 00:15:39.739 "dhgroup": "ffdhe8192" 00:15:39.739 } 00:15:39.739 } 00:15:39.739 ]' 00:15:39.739 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.997 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.256 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:40.256 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
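The trace repeats one and the same cycle for every digest/dhgroup/keyid combination, so a condensed sketch of a single sha256/ffdhe8192/key0 iteration may be easier to follow than the interleaved log. It is reconstructed only from the commands visible in the trace: hostrpc expands to rpc.py -s /var/tmp/host.sock exactly as shown at target/auth.sh@31, rpc_cmd is assumed to be the harness wrapper for the target-side rpc.py (its expansion is suppressed by xtrace_disable), and the key0/ckey0 values below are placeholders for the DHHC-1 secrets printed in the log.

  # Condensed sketch of one connect_authenticate iteration (sha256 / ffdhe8192 / key0),
  # reconstructed from the xtrace output above; values are placeholders, not new test logic.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1
  hostid=8f4e03b1-7080-439e-b116-202a2cecf6a1
  key0='DHHC-1:00:<key0-secret>:'          # placeholder for the host secret printed in the trace
  ckey0='DHHC-1:03:<ctrlr-key0-secret>:'   # placeholder for the controller secret printed in the trace

  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # as expanded at target/auth.sh@31
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # assumed target-side wrapper (expansion hidden by xtrace_disable)

  # 1. Limit the host-side bdev/nvme layer to the digest and DH group under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # 2. Register the host on the subsystem with the DH-HMAC-CHAP key pair for this keyid.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a host-side controller over TCP, which forces the authentication handshake.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Verify on the target that the new qpair finished authentication with the expected
  #    parameters, then drop the host-side controller again.
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                      # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect completed
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'    # expect sha256
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'   # expect ffdhe8192
  hostrpc bdev_nvme_detach_controller nvme0

  # 5. Repeat the handshake with the kernel initiator, then clean up the host entry.
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The log entries that follow simply rerun this cycle with key1, key2 and key3; key3 has no controller key, so its iterations omit --dhchap-ctrlr-key and --dhchap-ctrl-secret.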
00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.823 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.390 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.958 00:15:41.958 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.958 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.958 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.216 11:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.216 { 00:15:42.216 "cntlid": 43, 00:15:42.216 "qid": 0, 00:15:42.216 "state": "enabled", 00:15:42.216 "thread": "nvmf_tgt_poll_group_000", 00:15:42.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:42.216 "listen_address": { 00:15:42.216 "trtype": "TCP", 00:15:42.216 "adrfam": "IPv4", 00:15:42.216 "traddr": "10.0.0.3", 00:15:42.216 "trsvcid": "4420" 00:15:42.216 }, 00:15:42.216 "peer_address": { 00:15:42.216 "trtype": "TCP", 00:15:42.216 "adrfam": "IPv4", 00:15:42.216 "traddr": "10.0.0.1", 00:15:42.216 "trsvcid": "58146" 00:15:42.216 }, 00:15:42.216 "auth": { 00:15:42.216 "state": "completed", 00:15:42.216 "digest": "sha256", 00:15:42.216 "dhgroup": "ffdhe8192" 00:15:42.216 } 00:15:42.216 } 00:15:42.216 ]' 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.216 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.475 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.475 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.475 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.734 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:42.734 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.301 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.559 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.526 00:15:44.526 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.526 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.526 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.794 11:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.794 { 00:15:44.794 "cntlid": 45, 00:15:44.794 "qid": 0, 00:15:44.794 "state": "enabled", 00:15:44.794 "thread": "nvmf_tgt_poll_group_000", 00:15:44.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:44.794 "listen_address": { 00:15:44.794 "trtype": "TCP", 00:15:44.794 "adrfam": "IPv4", 00:15:44.794 "traddr": "10.0.0.3", 00:15:44.794 "trsvcid": "4420" 00:15:44.794 }, 00:15:44.794 "peer_address": { 00:15:44.794 "trtype": "TCP", 00:15:44.794 "adrfam": "IPv4", 00:15:44.794 "traddr": "10.0.0.1", 00:15:44.794 "trsvcid": "58170" 00:15:44.794 }, 00:15:44.794 "auth": { 00:15:44.794 "state": "completed", 00:15:44.794 "digest": "sha256", 00:15:44.794 "dhgroup": "ffdhe8192" 00:15:44.794 } 00:15:44.794 } 00:15:44.794 ]' 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.794 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.360 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:45.360 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.928 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.186 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.122 00:15:47.122 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.122 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.122 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.380 
11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.380 { 00:15:47.380 "cntlid": 47, 00:15:47.380 "qid": 0, 00:15:47.380 "state": "enabled", 00:15:47.380 "thread": "nvmf_tgt_poll_group_000", 00:15:47.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:47.380 "listen_address": { 00:15:47.380 "trtype": "TCP", 00:15:47.380 "adrfam": "IPv4", 00:15:47.380 "traddr": "10.0.0.3", 00:15:47.380 "trsvcid": "4420" 00:15:47.380 }, 00:15:47.380 "peer_address": { 00:15:47.380 "trtype": "TCP", 00:15:47.380 "adrfam": "IPv4", 00:15:47.380 "traddr": "10.0.0.1", 00:15:47.380 "trsvcid": "52166" 00:15:47.380 }, 00:15:47.380 "auth": { 00:15:47.380 "state": "completed", 00:15:47.380 "digest": "sha256", 00:15:47.380 "dhgroup": "ffdhe8192" 00:15:47.380 } 00:15:47.380 } 00:15:47.380 ]' 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.380 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.947 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:47.947 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.515 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.773 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.774 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.032 00:15:49.032 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.032 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.032 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.290 { 00:15:49.290 "cntlid": 49, 00:15:49.290 "qid": 0, 00:15:49.290 "state": "enabled", 00:15:49.290 "thread": "nvmf_tgt_poll_group_000", 00:15:49.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:49.290 "listen_address": { 00:15:49.290 "trtype": "TCP", 00:15:49.290 "adrfam": "IPv4", 00:15:49.290 "traddr": "10.0.0.3", 00:15:49.290 "trsvcid": "4420" 00:15:49.290 }, 00:15:49.290 "peer_address": { 00:15:49.290 "trtype": "TCP", 00:15:49.290 "adrfam": "IPv4", 00:15:49.290 "traddr": "10.0.0.1", 00:15:49.290 "trsvcid": "52194" 00:15:49.290 }, 00:15:49.290 "auth": { 00:15:49.290 "state": "completed", 00:15:49.290 "digest": "sha384", 00:15:49.290 "dhgroup": "null" 00:15:49.290 } 00:15:49.290 } 00:15:49.290 ]' 00:15:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.549 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.807 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:49.807 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.373 11:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.373 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.940 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.197 00:15:51.197 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.197 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
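The controller and qpair checks that follow read back what was actually negotiated, using the same jq filters seen throughout this trace. A condensed sketch of that verification for the current sha384/null pass, assuming the qpairs JSON has the shape shown in this log:

# Host side: the attached controller should be visible as nvme0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expected: nvme0
# Target side: the qpair's auth block records the negotiated digest, DH group and state
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]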
00:15:51.197 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.455 { 00:15:51.455 "cntlid": 51, 00:15:51.455 "qid": 0, 00:15:51.455 "state": "enabled", 00:15:51.455 "thread": "nvmf_tgt_poll_group_000", 00:15:51.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:51.455 "listen_address": { 00:15:51.455 "trtype": "TCP", 00:15:51.455 "adrfam": "IPv4", 00:15:51.455 "traddr": "10.0.0.3", 00:15:51.455 "trsvcid": "4420" 00:15:51.455 }, 00:15:51.455 "peer_address": { 00:15:51.455 "trtype": "TCP", 00:15:51.455 "adrfam": "IPv4", 00:15:51.455 "traddr": "10.0.0.1", 00:15:51.455 "trsvcid": "52228" 00:15:51.455 }, 00:15:51.455 "auth": { 00:15:51.455 "state": "completed", 00:15:51.455 "digest": "sha384", 00:15:51.455 "dhgroup": "null" 00:15:51.455 } 00:15:51.455 } 00:15:51.455 ]' 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.455 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.713 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:51.713 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.656 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:52.656 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.915 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.173 00:15:53.431 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.431 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:53.431 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.688 { 00:15:53.688 "cntlid": 53, 00:15:53.688 "qid": 0, 00:15:53.688 "state": "enabled", 00:15:53.688 "thread": "nvmf_tgt_poll_group_000", 00:15:53.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:53.688 "listen_address": { 00:15:53.688 "trtype": "TCP", 00:15:53.688 "adrfam": "IPv4", 00:15:53.688 "traddr": "10.0.0.3", 00:15:53.688 "trsvcid": "4420" 00:15:53.688 }, 00:15:53.688 "peer_address": { 00:15:53.688 "trtype": "TCP", 00:15:53.688 "adrfam": "IPv4", 00:15:53.688 "traddr": "10.0.0.1", 00:15:53.688 "trsvcid": "52240" 00:15:53.688 }, 00:15:53.688 "auth": { 00:15:53.688 "state": "completed", 00:15:53.688 "digest": "sha384", 00:15:53.688 "dhgroup": "null" 00:15:53.688 } 00:15:53.688 } 00:15:53.688 ]' 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.688 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.254 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:54.254 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.820 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.387 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.646 00:15:55.646 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.646 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:15:55.646 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.912 { 00:15:55.912 "cntlid": 55, 00:15:55.912 "qid": 0, 00:15:55.912 "state": "enabled", 00:15:55.912 "thread": "nvmf_tgt_poll_group_000", 00:15:55.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:55.912 "listen_address": { 00:15:55.912 "trtype": "TCP", 00:15:55.912 "adrfam": "IPv4", 00:15:55.912 "traddr": "10.0.0.3", 00:15:55.912 "trsvcid": "4420" 00:15:55.912 }, 00:15:55.912 "peer_address": { 00:15:55.912 "trtype": "TCP", 00:15:55.912 "adrfam": "IPv4", 00:15:55.912 "traddr": "10.0.0.1", 00:15:55.912 "trsvcid": "37714" 00:15:55.912 }, 00:15:55.912 "auth": { 00:15:55.912 "state": "completed", 00:15:55.912 "digest": "sha384", 00:15:55.912 "dhgroup": "null" 00:15:55.912 } 00:15:55.912 } 00:15:55.912 ]' 00:15:55.912 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.171 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.429 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:56.429 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
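Each pass also validates the same credentials with the kernel initiator via the nvme_connect wrapper and nvme disconnect calls logged above. In sketch form (flags, address and NQNs as logged; the DHHC-1 placeholders stand for the generated secrets printed in the connect lines):

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 \
    --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 \
    --dhchap-secret "<host DHHC-1 secret>" --dhchap-ctrl-secret "<controller DHHC-1 secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0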
00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.383 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.642 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.642 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.642 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.642 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.642 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.642 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.900 00:15:57.900 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.900 
11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.900 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.468 { 00:15:58.468 "cntlid": 57, 00:15:58.468 "qid": 0, 00:15:58.468 "state": "enabled", 00:15:58.468 "thread": "nvmf_tgt_poll_group_000", 00:15:58.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:15:58.468 "listen_address": { 00:15:58.468 "trtype": "TCP", 00:15:58.468 "adrfam": "IPv4", 00:15:58.468 "traddr": "10.0.0.3", 00:15:58.468 "trsvcid": "4420" 00:15:58.468 }, 00:15:58.468 "peer_address": { 00:15:58.468 "trtype": "TCP", 00:15:58.468 "adrfam": "IPv4", 00:15:58.468 "traddr": "10.0.0.1", 00:15:58.468 "trsvcid": "37736" 00:15:58.468 }, 00:15:58.468 "auth": { 00:15:58.468 "state": "completed", 00:15:58.468 "digest": "sha384", 00:15:58.468 "dhgroup": "ffdhe2048" 00:15:58.468 } 00:15:58.468 } 00:15:58.468 ]' 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.468 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.727 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:58.727 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: 
--dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:59.662 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.921 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.487 00:16:00.487 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.487 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.487 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.746 { 00:16:00.746 "cntlid": 59, 00:16:00.746 "qid": 0, 00:16:00.746 "state": "enabled", 00:16:00.746 "thread": "nvmf_tgt_poll_group_000", 00:16:00.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:00.746 "listen_address": { 00:16:00.746 "trtype": "TCP", 00:16:00.746 "adrfam": "IPv4", 00:16:00.746 "traddr": "10.0.0.3", 00:16:00.746 "trsvcid": "4420" 00:16:00.746 }, 00:16:00.746 "peer_address": { 00:16:00.746 "trtype": "TCP", 00:16:00.746 "adrfam": "IPv4", 00:16:00.746 "traddr": "10.0.0.1", 00:16:00.746 "trsvcid": "37772" 00:16:00.746 }, 00:16:00.746 "auth": { 00:16:00.746 "state": "completed", 00:16:00.746 "digest": "sha384", 00:16:00.746 "dhgroup": "ffdhe2048" 00:16:00.746 } 00:16:00.746 } 00:16:00.746 ]' 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.746 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.004 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.004 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.004 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.004 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.004 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.262 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:01.262 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:02.246 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.247 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.504 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.759 00:16:03.016 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.016 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.016 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.275 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.275 { 00:16:03.275 "cntlid": 61, 00:16:03.276 "qid": 0, 00:16:03.276 "state": "enabled", 00:16:03.276 "thread": "nvmf_tgt_poll_group_000", 00:16:03.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:03.276 "listen_address": { 00:16:03.276 "trtype": "TCP", 00:16:03.276 "adrfam": "IPv4", 00:16:03.276 "traddr": "10.0.0.3", 00:16:03.276 "trsvcid": "4420" 00:16:03.276 }, 00:16:03.276 "peer_address": { 00:16:03.276 "trtype": "TCP", 00:16:03.276 "adrfam": "IPv4", 00:16:03.276 "traddr": "10.0.0.1", 00:16:03.276 "trsvcid": "37802" 00:16:03.276 }, 00:16:03.276 "auth": { 00:16:03.276 "state": "completed", 00:16:03.276 "digest": "sha384", 00:16:03.276 "dhgroup": "ffdhe2048" 00:16:03.276 } 00:16:03.276 } 00:16:03.276 ]' 00:16:03.276 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.276 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.276 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.276 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.276 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.533 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.533 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.533 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.790 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:03.790 11:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.729 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.729 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.987 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.987 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.987 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.987 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.245 00:16:05.245 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.245 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.245 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.502 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.502 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.502 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.502 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.761 { 00:16:05.761 "cntlid": 63, 00:16:05.761 "qid": 0, 00:16:05.761 "state": "enabled", 00:16:05.761 "thread": "nvmf_tgt_poll_group_000", 00:16:05.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:05.761 "listen_address": { 00:16:05.761 "trtype": "TCP", 00:16:05.761 "adrfam": "IPv4", 00:16:05.761 "traddr": "10.0.0.3", 00:16:05.761 "trsvcid": "4420" 00:16:05.761 }, 00:16:05.761 "peer_address": { 00:16:05.761 "trtype": "TCP", 00:16:05.761 "adrfam": "IPv4", 00:16:05.761 "traddr": "10.0.0.1", 00:16:05.761 "trsvcid": "58412" 00:16:05.761 }, 00:16:05.761 "auth": { 00:16:05.761 "state": "completed", 00:16:05.761 "digest": "sha384", 00:16:05.761 "dhgroup": "ffdhe2048" 00:16:05.761 } 00:16:05.761 } 00:16:05.761 ]' 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.761 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.327 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:06.327 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.892 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:07.149 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.714 00:16:07.714 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.714 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.714 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.972 { 00:16:07.972 "cntlid": 65, 00:16:07.972 "qid": 0, 00:16:07.972 "state": "enabled", 00:16:07.972 "thread": "nvmf_tgt_poll_group_000", 00:16:07.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:07.972 "listen_address": { 00:16:07.972 "trtype": "TCP", 00:16:07.972 "adrfam": "IPv4", 00:16:07.972 "traddr": "10.0.0.3", 00:16:07.972 "trsvcid": "4420" 00:16:07.972 }, 00:16:07.972 "peer_address": { 00:16:07.972 "trtype": "TCP", 00:16:07.972 "adrfam": "IPv4", 00:16:07.972 "traddr": "10.0.0.1", 00:16:07.972 "trsvcid": "58444" 00:16:07.972 }, 00:16:07.972 "auth": { 00:16:07.972 "state": "completed", 00:16:07.972 "digest": "sha384", 00:16:07.972 "dhgroup": "ffdhe3072" 00:16:07.972 } 00:16:07.972 } 00:16:07.972 ]' 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.972 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.229 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.229 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.229 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.229 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.229 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.487 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:08.487 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:09.420 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.678 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.678 11:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.679 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.936 00:16:09.937 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.937 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.937 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.195 { 00:16:10.195 "cntlid": 67, 00:16:10.195 "qid": 0, 00:16:10.195 "state": "enabled", 00:16:10.195 "thread": "nvmf_tgt_poll_group_000", 00:16:10.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:10.195 "listen_address": { 00:16:10.195 "trtype": "TCP", 00:16:10.195 "adrfam": "IPv4", 00:16:10.195 "traddr": "10.0.0.3", 00:16:10.195 "trsvcid": "4420" 00:16:10.195 }, 00:16:10.195 "peer_address": { 00:16:10.195 "trtype": "TCP", 00:16:10.195 "adrfam": "IPv4", 00:16:10.195 "traddr": "10.0.0.1", 00:16:10.195 "trsvcid": "58472" 00:16:10.195 }, 00:16:10.195 "auth": { 00:16:10.195 "state": "completed", 00:16:10.195 "digest": "sha384", 00:16:10.195 "dhgroup": "ffdhe3072" 00:16:10.195 } 00:16:10.195 } 00:16:10.195 ]' 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.195 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.506 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.506 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.506 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.506 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:10.506 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.441 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.699 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.958 00:16:11.958 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.958 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.958 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.215 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.215 { 00:16:12.215 "cntlid": 69, 00:16:12.215 "qid": 0, 00:16:12.215 "state": "enabled", 00:16:12.215 "thread": "nvmf_tgt_poll_group_000", 00:16:12.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:12.215 "listen_address": { 00:16:12.215 "trtype": "TCP", 00:16:12.216 "adrfam": "IPv4", 00:16:12.216 "traddr": "10.0.0.3", 00:16:12.216 "trsvcid": "4420" 00:16:12.216 }, 00:16:12.216 "peer_address": { 00:16:12.216 "trtype": "TCP", 00:16:12.216 "adrfam": "IPv4", 00:16:12.216 "traddr": "10.0.0.1", 00:16:12.216 "trsvcid": "58494" 00:16:12.216 }, 00:16:12.216 "auth": { 00:16:12.216 "state": "completed", 00:16:12.216 "digest": "sha384", 00:16:12.216 "dhgroup": "ffdhe3072" 00:16:12.216 } 00:16:12.216 } 00:16:12.216 ]' 00:16:12.216 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.473 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:12.474 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.731 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:12.731 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:13.342 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.599 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.857 00:16:13.857 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.857 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.857 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.116 { 00:16:14.116 "cntlid": 71, 00:16:14.116 "qid": 0, 00:16:14.116 "state": "enabled", 00:16:14.116 "thread": "nvmf_tgt_poll_group_000", 00:16:14.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:14.116 "listen_address": { 00:16:14.116 "trtype": "TCP", 00:16:14.116 "adrfam": "IPv4", 00:16:14.116 "traddr": "10.0.0.3", 00:16:14.116 "trsvcid": "4420" 00:16:14.116 }, 00:16:14.116 "peer_address": { 00:16:14.116 "trtype": "TCP", 00:16:14.116 "adrfam": "IPv4", 00:16:14.116 "traddr": "10.0.0.1", 00:16:14.116 "trsvcid": "58528" 00:16:14.116 }, 00:16:14.116 "auth": { 00:16:14.116 "state": "completed", 00:16:14.116 "digest": "sha384", 00:16:14.116 "dhgroup": "ffdhe3072" 00:16:14.116 } 00:16:14.116 } 00:16:14.116 ]' 00:16:14.116 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.374 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.632 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:14.632 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.566 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.826 11:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.826 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.088 00:16:16.347 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.347 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.347 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.605 { 00:16:16.605 "cntlid": 73, 00:16:16.605 "qid": 0, 00:16:16.605 "state": "enabled", 00:16:16.605 "thread": "nvmf_tgt_poll_group_000", 00:16:16.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:16.605 "listen_address": { 00:16:16.605 "trtype": "TCP", 00:16:16.605 "adrfam": "IPv4", 00:16:16.605 "traddr": "10.0.0.3", 00:16:16.605 "trsvcid": "4420" 00:16:16.605 }, 00:16:16.605 "peer_address": { 00:16:16.605 "trtype": "TCP", 00:16:16.605 "adrfam": "IPv4", 00:16:16.605 "traddr": "10.0.0.1", 00:16:16.605 "trsvcid": "41502" 00:16:16.605 }, 00:16:16.605 "auth": { 00:16:16.605 "state": "completed", 00:16:16.605 "digest": "sha384", 00:16:16.605 "dhgroup": "ffdhe4096" 00:16:16.605 } 00:16:16.605 } 00:16:16.605 ]' 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.605 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.605 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.605 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.605 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.605 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.605 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.864 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:16.864 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.799 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.057 11:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.057 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.315 00:16:18.315 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.315 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.315 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.574 { 00:16:18.574 "cntlid": 75, 00:16:18.574 "qid": 0, 00:16:18.574 "state": "enabled", 00:16:18.574 "thread": "nvmf_tgt_poll_group_000", 00:16:18.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:18.574 "listen_address": { 00:16:18.574 "trtype": "TCP", 00:16:18.574 "adrfam": "IPv4", 00:16:18.574 "traddr": "10.0.0.3", 00:16:18.574 "trsvcid": "4420" 00:16:18.574 }, 00:16:18.574 "peer_address": { 00:16:18.574 "trtype": "TCP", 00:16:18.574 "adrfam": "IPv4", 00:16:18.574 "traddr": "10.0.0.1", 00:16:18.574 "trsvcid": "41526" 00:16:18.574 }, 00:16:18.574 "auth": { 00:16:18.574 "state": "completed", 00:16:18.574 "digest": "sha384", 00:16:18.574 "dhgroup": "ffdhe4096" 00:16:18.574 } 00:16:18.574 } 00:16:18.574 ]' 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.574 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.832 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:16:18.832 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.832 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.832 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.832 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.090 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:19.090 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.027 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.597 00:16:20.597 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.597 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.597 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.856 { 00:16:20.856 "cntlid": 77, 00:16:20.856 "qid": 0, 00:16:20.856 "state": "enabled", 00:16:20.856 "thread": "nvmf_tgt_poll_group_000", 00:16:20.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:20.856 "listen_address": { 00:16:20.856 "trtype": "TCP", 00:16:20.856 "adrfam": "IPv4", 00:16:20.856 "traddr": "10.0.0.3", 00:16:20.856 "trsvcid": "4420" 00:16:20.856 }, 00:16:20.856 "peer_address": { 00:16:20.856 "trtype": "TCP", 00:16:20.856 "adrfam": "IPv4", 00:16:20.856 "traddr": "10.0.0.1", 00:16:20.856 "trsvcid": "41558" 00:16:20.856 }, 00:16:20.856 "auth": { 00:16:20.856 "state": "completed", 00:16:20.856 "digest": "sha384", 00:16:20.856 "dhgroup": "ffdhe4096" 00:16:20.856 } 00:16:20.856 } 00:16:20.856 ]' 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.856 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:16:21.114 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.114 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.114 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.114 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.114 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.372 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:21.372 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:21.940 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.199 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.457 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:22.457 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.457 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.457 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.457 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.458 11:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.458 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.716 00:16:22.716 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.716 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.716 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.343 { 00:16:23.343 "cntlid": 79, 00:16:23.343 "qid": 0, 00:16:23.343 "state": "enabled", 00:16:23.343 "thread": "nvmf_tgt_poll_group_000", 00:16:23.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:23.343 "listen_address": { 00:16:23.343 "trtype": "TCP", 00:16:23.343 "adrfam": "IPv4", 00:16:23.343 "traddr": "10.0.0.3", 00:16:23.343 "trsvcid": "4420" 00:16:23.343 }, 00:16:23.343 "peer_address": { 00:16:23.343 "trtype": "TCP", 00:16:23.343 "adrfam": "IPv4", 00:16:23.343 "traddr": "10.0.0.1", 00:16:23.343 "trsvcid": "41566" 00:16:23.343 }, 00:16:23.343 "auth": { 00:16:23.343 "state": "completed", 00:16:23.343 "digest": "sha384", 00:16:23.343 "dhgroup": "ffdhe4096" 00:16:23.343 } 00:16:23.343 } 00:16:23.343 ]' 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.343 11:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.343 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.601 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:23.601 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.535 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.535 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.536 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.536 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.536 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.794 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.794 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.794 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.794 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.051 00:16:25.051 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.051 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.051 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.310 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.310 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.310 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.310 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.568 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.568 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.568 { 00:16:25.568 "cntlid": 81, 00:16:25.568 "qid": 0, 00:16:25.568 "state": "enabled", 00:16:25.568 "thread": "nvmf_tgt_poll_group_000", 00:16:25.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:25.568 "listen_address": { 00:16:25.568 "trtype": "TCP", 00:16:25.568 "adrfam": "IPv4", 00:16:25.568 "traddr": "10.0.0.3", 00:16:25.568 "trsvcid": "4420" 00:16:25.568 }, 00:16:25.568 "peer_address": { 00:16:25.568 "trtype": "TCP", 00:16:25.568 "adrfam": "IPv4", 00:16:25.568 "traddr": "10.0.0.1", 00:16:25.568 "trsvcid": "50900" 00:16:25.568 }, 00:16:25.569 "auth": { 00:16:25.569 "state": "completed", 00:16:25.569 "digest": "sha384", 00:16:25.569 "dhgroup": "ffdhe6144" 00:16:25.569 } 00:16:25.569 } 00:16:25.569 ]' 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.569 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.827 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:25.827 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.814 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.815 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.381 00:16:27.381 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.381 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.381 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.640 { 00:16:27.640 "cntlid": 83, 00:16:27.640 "qid": 0, 00:16:27.640 "state": "enabled", 00:16:27.640 "thread": "nvmf_tgt_poll_group_000", 00:16:27.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:27.640 "listen_address": { 00:16:27.640 "trtype": "TCP", 00:16:27.640 "adrfam": "IPv4", 00:16:27.640 "traddr": "10.0.0.3", 00:16:27.640 "trsvcid": "4420" 00:16:27.640 }, 00:16:27.640 "peer_address": { 00:16:27.640 "trtype": "TCP", 00:16:27.640 "adrfam": "IPv4", 00:16:27.640 "traddr": "10.0.0.1", 00:16:27.640 "trsvcid": "50936" 00:16:27.640 }, 00:16:27.640 "auth": { 00:16:27.640 "state": "completed", 00:16:27.640 "digest": "sha384", 
00:16:27.640 "dhgroup": "ffdhe6144" 00:16:27.640 } 00:16:27.640 } 00:16:27.640 ]' 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.640 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.901 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.901 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.901 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.901 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.901 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.160 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:28.160 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:28.798 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.056 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.057 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.057 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.057 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.624 00:16:29.624 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.624 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.624 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.883 { 00:16:29.883 "cntlid": 85, 00:16:29.883 "qid": 0, 00:16:29.883 "state": "enabled", 00:16:29.883 "thread": "nvmf_tgt_poll_group_000", 00:16:29.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:29.883 "listen_address": { 00:16:29.883 "trtype": "TCP", 00:16:29.883 "adrfam": "IPv4", 00:16:29.883 "traddr": "10.0.0.3", 00:16:29.883 "trsvcid": "4420" 00:16:29.883 }, 00:16:29.883 "peer_address": { 00:16:29.883 "trtype": "TCP", 00:16:29.883 "adrfam": "IPv4", 00:16:29.883 "traddr": "10.0.0.1", 00:16:29.883 "trsvcid": "50968" 
00:16:29.883 }, 00:16:29.883 "auth": { 00:16:29.883 "state": "completed", 00:16:29.883 "digest": "sha384", 00:16:29.883 "dhgroup": "ffdhe6144" 00:16:29.883 } 00:16:29.883 } 00:16:29.883 ]' 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.883 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.141 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:30.141 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.082 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.648 00:16:31.648 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.648 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.648 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.906 { 00:16:31.906 "cntlid": 87, 00:16:31.906 "qid": 0, 00:16:31.906 "state": "enabled", 00:16:31.906 "thread": "nvmf_tgt_poll_group_000", 00:16:31.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:31.906 "listen_address": { 00:16:31.906 "trtype": "TCP", 00:16:31.906 "adrfam": "IPv4", 00:16:31.906 "traddr": "10.0.0.3", 00:16:31.906 "trsvcid": "4420" 00:16:31.906 }, 00:16:31.906 "peer_address": { 00:16:31.906 "trtype": "TCP", 00:16:31.906 "adrfam": "IPv4", 00:16:31.906 "traddr": "10.0.0.1", 00:16:31.906 "trsvcid": 
"50982" 00:16:31.906 }, 00:16:31.906 "auth": { 00:16:31.906 "state": "completed", 00:16:31.906 "digest": "sha384", 00:16:31.906 "dhgroup": "ffdhe6144" 00:16:31.906 } 00:16:31.906 } 00:16:31.906 ]' 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.906 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.164 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.422 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:32.422 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.000 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.260 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.194 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.194 { 00:16:34.194 "cntlid": 89, 00:16:34.194 "qid": 0, 00:16:34.194 "state": "enabled", 00:16:34.194 "thread": "nvmf_tgt_poll_group_000", 00:16:34.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:34.194 "listen_address": { 00:16:34.194 "trtype": "TCP", 00:16:34.194 "adrfam": "IPv4", 00:16:34.194 "traddr": "10.0.0.3", 00:16:34.194 "trsvcid": "4420" 00:16:34.194 }, 00:16:34.194 "peer_address": { 00:16:34.194 
"trtype": "TCP", 00:16:34.194 "adrfam": "IPv4", 00:16:34.194 "traddr": "10.0.0.1", 00:16:34.194 "trsvcid": "51018" 00:16:34.194 }, 00:16:34.194 "auth": { 00:16:34.194 "state": "completed", 00:16:34.194 "digest": "sha384", 00:16:34.194 "dhgroup": "ffdhe8192" 00:16:34.194 } 00:16:34.194 } 00:16:34.194 ]' 00:16:34.194 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.454 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.712 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:34.712 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.278 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.279 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.279 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.845 11:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.845 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.411 00:16:36.411 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.411 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.411 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.669 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.669 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.669 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.669 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.669 { 00:16:36.669 "cntlid": 91, 00:16:36.669 "qid": 0, 00:16:36.669 "state": "enabled", 00:16:36.669 "thread": "nvmf_tgt_poll_group_000", 00:16:36.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 
00:16:36.669 "listen_address": { 00:16:36.669 "trtype": "TCP", 00:16:36.669 "adrfam": "IPv4", 00:16:36.669 "traddr": "10.0.0.3", 00:16:36.669 "trsvcid": "4420" 00:16:36.669 }, 00:16:36.669 "peer_address": { 00:16:36.669 "trtype": "TCP", 00:16:36.669 "adrfam": "IPv4", 00:16:36.669 "traddr": "10.0.0.1", 00:16:36.669 "trsvcid": "42504" 00:16:36.669 }, 00:16:36.669 "auth": { 00:16:36.669 "state": "completed", 00:16:36.669 "digest": "sha384", 00:16:36.669 "dhgroup": "ffdhe8192" 00:16:36.669 } 00:16:36.669 } 00:16:36.669 ]' 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.669 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.323 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:37.323 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:37.889 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.148 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.714 00:16:38.714 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.714 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.714 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.972 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.972 { 00:16:38.972 "cntlid": 93, 00:16:38.972 "qid": 0, 00:16:38.972 "state": "enabled", 00:16:38.972 "thread": 
"nvmf_tgt_poll_group_000", 00:16:38.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:38.972 "listen_address": { 00:16:38.972 "trtype": "TCP", 00:16:38.972 "adrfam": "IPv4", 00:16:38.972 "traddr": "10.0.0.3", 00:16:38.972 "trsvcid": "4420" 00:16:38.972 }, 00:16:38.972 "peer_address": { 00:16:38.972 "trtype": "TCP", 00:16:38.972 "adrfam": "IPv4", 00:16:38.972 "traddr": "10.0.0.1", 00:16:38.972 "trsvcid": "42546" 00:16:38.972 }, 00:16:38.972 "auth": { 00:16:38.972 "state": "completed", 00:16:38.973 "digest": "sha384", 00:16:38.973 "dhgroup": "ffdhe8192" 00:16:38.973 } 00:16:38.973 } 00:16:38.973 ]' 00:16:38.973 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.230 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.488 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:39.488 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:40.424 11:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.424 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.359 00:16:41.359 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.359 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.359 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.617 { 00:16:41.617 "cntlid": 95, 00:16:41.617 "qid": 0, 00:16:41.617 "state": "enabled", 00:16:41.617 
"thread": "nvmf_tgt_poll_group_000", 00:16:41.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:41.617 "listen_address": { 00:16:41.617 "trtype": "TCP", 00:16:41.617 "adrfam": "IPv4", 00:16:41.617 "traddr": "10.0.0.3", 00:16:41.617 "trsvcid": "4420" 00:16:41.617 }, 00:16:41.617 "peer_address": { 00:16:41.617 "trtype": "TCP", 00:16:41.617 "adrfam": "IPv4", 00:16:41.617 "traddr": "10.0.0.1", 00:16:41.617 "trsvcid": "42590" 00:16:41.617 }, 00:16:41.617 "auth": { 00:16:41.617 "state": "completed", 00:16:41.617 "digest": "sha384", 00:16:41.617 "dhgroup": "ffdhe8192" 00:16:41.617 } 00:16:41.617 } 00:16:41.617 ]' 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.617 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.617 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.617 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.617 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.617 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.617 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.875 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:41.875 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.851 11:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.851 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.109 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.367 00:16:43.367 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.367 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.367 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.624 { 00:16:43.624 "cntlid": 97, 00:16:43.624 "qid": 0, 00:16:43.624 "state": "enabled", 00:16:43.624 "thread": "nvmf_tgt_poll_group_000", 00:16:43.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:43.624 "listen_address": { 00:16:43.624 "trtype": "TCP", 00:16:43.624 "adrfam": "IPv4", 00:16:43.624 "traddr": "10.0.0.3", 00:16:43.624 "trsvcid": "4420" 00:16:43.624 }, 00:16:43.624 "peer_address": { 00:16:43.624 "trtype": "TCP", 00:16:43.624 "adrfam": "IPv4", 00:16:43.624 "traddr": "10.0.0.1", 00:16:43.624 "trsvcid": "42614" 00:16:43.624 }, 00:16:43.624 "auth": { 00:16:43.624 "state": "completed", 00:16:43.624 "digest": "sha512", 00:16:43.624 "dhgroup": "null" 00:16:43.624 } 00:16:43.624 } 00:16:43.624 ]' 00:16:43.624 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.900 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.158 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:44.158 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.090 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.349 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.608 00:16:45.608 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.608 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.608 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.866 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.866 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.866 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.866 11:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.866 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.866 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.866 { 00:16:45.866 "cntlid": 99, 00:16:45.866 "qid": 0, 00:16:45.866 "state": "enabled", 00:16:45.866 "thread": "nvmf_tgt_poll_group_000", 00:16:45.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:45.866 "listen_address": { 00:16:45.866 "trtype": "TCP", 00:16:45.866 "adrfam": "IPv4", 00:16:45.866 "traddr": "10.0.0.3", 00:16:45.866 "trsvcid": "4420" 00:16:45.866 }, 00:16:45.866 "peer_address": { 00:16:45.866 "trtype": "TCP", 00:16:45.866 "adrfam": "IPv4", 00:16:45.866 "traddr": "10.0.0.1", 00:16:45.866 "trsvcid": "60012" 00:16:45.866 }, 00:16:45.867 "auth": { 00:16:45.867 "state": "completed", 00:16:45.867 "digest": "sha512", 00:16:45.867 "dhgroup": "null" 00:16:45.867 } 00:16:45.867 } 00:16:45.867 ]' 00:16:45.867 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.125 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.383 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:46.383 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:46.950 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.950 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:46.950 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.950 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.208 11:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.466 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.725 00:16:47.725 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.725 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.725 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.983 { 00:16:47.983 "cntlid": 101, 00:16:47.983 "qid": 0, 00:16:47.983 "state": "enabled", 00:16:47.983 "thread": "nvmf_tgt_poll_group_000", 00:16:47.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:47.983 "listen_address": { 00:16:47.983 "trtype": "TCP", 00:16:47.983 "adrfam": "IPv4", 00:16:47.983 "traddr": "10.0.0.3", 00:16:47.983 "trsvcid": "4420" 00:16:47.983 }, 00:16:47.983 "peer_address": { 00:16:47.983 "trtype": "TCP", 00:16:47.983 "adrfam": "IPv4", 00:16:47.983 "traddr": "10.0.0.1", 00:16:47.983 "trsvcid": "60040" 00:16:47.983 }, 00:16:47.983 "auth": { 00:16:47.983 "state": "completed", 00:16:47.983 "digest": "sha512", 00:16:47.983 "dhgroup": "null" 00:16:47.983 } 00:16:47.983 } 00:16:47.983 ]' 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.983 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.289 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:48.289 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.223 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.481 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.739 00:16:49.739 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.739 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.739 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.997 { 00:16:49.997 "cntlid": 103, 00:16:49.997 "qid": 0, 00:16:49.997 "state": "enabled", 00:16:49.997 "thread": "nvmf_tgt_poll_group_000", 00:16:49.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:49.997 "listen_address": { 00:16:49.997 "trtype": "TCP", 00:16:49.997 "adrfam": "IPv4", 00:16:49.997 "traddr": "10.0.0.3", 00:16:49.997 "trsvcid": "4420" 00:16:49.997 }, 00:16:49.997 "peer_address": { 00:16:49.997 "trtype": "TCP", 00:16:49.997 "adrfam": "IPv4", 00:16:49.997 "traddr": "10.0.0.1", 00:16:49.997 "trsvcid": "60074" 00:16:49.997 }, 00:16:49.997 "auth": { 00:16:49.997 "state": "completed", 00:16:49.997 "digest": "sha512", 00:16:49.997 "dhgroup": "null" 00:16:49.997 } 00:16:49.997 } 00:16:49.997 ]' 00:16:49.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.514 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:50.514 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.448 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.706 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.272 00:16:52.272 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.272 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.272 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.530 
11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.530 { 00:16:52.530 "cntlid": 105, 00:16:52.530 "qid": 0, 00:16:52.530 "state": "enabled", 00:16:52.530 "thread": "nvmf_tgt_poll_group_000", 00:16:52.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:52.530 "listen_address": { 00:16:52.530 "trtype": "TCP", 00:16:52.530 "adrfam": "IPv4", 00:16:52.530 "traddr": "10.0.0.3", 00:16:52.530 "trsvcid": "4420" 00:16:52.530 }, 00:16:52.530 "peer_address": { 00:16:52.530 "trtype": "TCP", 00:16:52.530 "adrfam": "IPv4", 00:16:52.530 "traddr": "10.0.0.1", 00:16:52.530 "trsvcid": "60104" 00:16:52.530 }, 00:16:52.530 "auth": { 00:16:52.530 "state": "completed", 00:16:52.530 "digest": "sha512", 00:16:52.530 "dhgroup": "ffdhe2048" 00:16:52.530 } 00:16:52.530 } 00:16:52.530 ]' 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.530 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.093 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:53.093 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:16:53.671 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:53.672 11:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.672 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.929 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.494 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.494 { 00:16:54.494 "cntlid": 107, 00:16:54.494 "qid": 0, 00:16:54.494 "state": "enabled", 00:16:54.494 "thread": "nvmf_tgt_poll_group_000", 00:16:54.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:54.494 "listen_address": { 00:16:54.494 "trtype": "TCP", 00:16:54.494 "adrfam": "IPv4", 00:16:54.494 "traddr": "10.0.0.3", 00:16:54.494 "trsvcid": "4420" 00:16:54.494 }, 00:16:54.494 "peer_address": { 00:16:54.494 "trtype": "TCP", 00:16:54.494 "adrfam": "IPv4", 00:16:54.494 "traddr": "10.0.0.1", 00:16:54.494 "trsvcid": "60134" 00:16:54.494 }, 00:16:54.494 "auth": { 00:16:54.494 "state": "completed", 00:16:54.494 "digest": "sha512", 00:16:54.494 "dhgroup": "ffdhe2048" 00:16:54.494 } 00:16:54.494 } 00:16:54.494 ]' 00:16:54.494 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.757 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.027 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:55.027 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:55.593 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.161 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.419 00:16:56.419 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.419 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.419 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.677 { 00:16:56.677 "cntlid": 109, 00:16:56.677 "qid": 0, 00:16:56.677 "state": "enabled", 00:16:56.677 "thread": "nvmf_tgt_poll_group_000", 00:16:56.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:56.677 "listen_address": { 00:16:56.677 "trtype": "TCP", 00:16:56.677 "adrfam": "IPv4", 00:16:56.677 "traddr": "10.0.0.3", 00:16:56.677 "trsvcid": "4420" 00:16:56.677 }, 00:16:56.677 "peer_address": { 00:16:56.677 "trtype": "TCP", 00:16:56.677 "adrfam": "IPv4", 00:16:56.677 "traddr": "10.0.0.1", 00:16:56.677 "trsvcid": "53416" 00:16:56.677 }, 00:16:56.677 "auth": { 00:16:56.677 "state": "completed", 00:16:56.677 "digest": "sha512", 00:16:56.677 "dhgroup": "ffdhe2048" 00:16:56.677 } 00:16:56.677 } 00:16:56.677 ]' 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.677 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.935 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.935 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.935 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.935 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.935 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.192 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:57.192 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.125 11:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.125 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.690 00:16:58.690 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.690 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.690 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.948 { 00:16:58.948 "cntlid": 111, 00:16:58.948 "qid": 0, 00:16:58.948 "state": "enabled", 00:16:58.948 "thread": "nvmf_tgt_poll_group_000", 00:16:58.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:16:58.948 "listen_address": { 00:16:58.948 "trtype": "TCP", 00:16:58.948 "adrfam": "IPv4", 00:16:58.948 "traddr": "10.0.0.3", 00:16:58.948 "trsvcid": "4420" 00:16:58.948 }, 00:16:58.948 "peer_address": { 00:16:58.948 "trtype": "TCP", 00:16:58.948 "adrfam": "IPv4", 00:16:58.948 "traddr": "10.0.0.1", 00:16:58.948 "trsvcid": "53434" 00:16:58.948 }, 00:16:58.948 "auth": { 00:16:58.948 "state": "completed", 00:16:58.948 "digest": "sha512", 00:16:58.948 "dhgroup": "ffdhe2048" 00:16:58.948 } 00:16:58.948 } 00:16:58.948 ]' 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.206 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.206 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.206 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.463 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:16:59.463 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:00.029 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.029 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:00.029 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.029 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.287 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.287 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.287 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.287 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.287 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.545 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.803 00:17:00.803 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.803 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:17:00.803 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.369 { 00:17:01.369 "cntlid": 113, 00:17:01.369 "qid": 0, 00:17:01.369 "state": "enabled", 00:17:01.369 "thread": "nvmf_tgt_poll_group_000", 00:17:01.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:01.369 "listen_address": { 00:17:01.369 "trtype": "TCP", 00:17:01.369 "adrfam": "IPv4", 00:17:01.369 "traddr": "10.0.0.3", 00:17:01.369 "trsvcid": "4420" 00:17:01.369 }, 00:17:01.369 "peer_address": { 00:17:01.369 "trtype": "TCP", 00:17:01.369 "adrfam": "IPv4", 00:17:01.369 "traddr": "10.0.0.1", 00:17:01.369 "trsvcid": "53452" 00:17:01.369 }, 00:17:01.369 "auth": { 00:17:01.369 "state": "completed", 00:17:01.369 "digest": "sha512", 00:17:01.369 "dhgroup": "ffdhe3072" 00:17:01.369 } 00:17:01.369 } 00:17:01.369 ]' 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.369 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.627 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:01.627 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret 
DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.560 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.818 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.075 00:17:03.075 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.075 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.075 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.665 { 00:17:03.665 "cntlid": 115, 00:17:03.665 "qid": 0, 00:17:03.665 "state": "enabled", 00:17:03.665 "thread": "nvmf_tgt_poll_group_000", 00:17:03.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:03.665 "listen_address": { 00:17:03.665 "trtype": "TCP", 00:17:03.665 "adrfam": "IPv4", 00:17:03.665 "traddr": "10.0.0.3", 00:17:03.665 "trsvcid": "4420" 00:17:03.665 }, 00:17:03.665 "peer_address": { 00:17:03.665 "trtype": "TCP", 00:17:03.665 "adrfam": "IPv4", 00:17:03.665 "traddr": "10.0.0.1", 00:17:03.665 "trsvcid": "53482" 00:17:03.665 }, 00:17:03.665 "auth": { 00:17:03.665 "state": "completed", 00:17:03.665 "digest": "sha512", 00:17:03.665 "dhgroup": "ffdhe3072" 00:17:03.665 } 00:17:03.665 } 00:17:03.665 ]' 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.665 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.665 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.665 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.665 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.923 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:03.923 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 
8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:04.488 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.489 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.747 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.312 00:17:05.312 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.312 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.312 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.569 { 00:17:05.569 "cntlid": 117, 00:17:05.569 "qid": 0, 00:17:05.569 "state": "enabled", 00:17:05.569 "thread": "nvmf_tgt_poll_group_000", 00:17:05.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:05.569 "listen_address": { 00:17:05.569 "trtype": "TCP", 00:17:05.569 "adrfam": "IPv4", 00:17:05.569 "traddr": "10.0.0.3", 00:17:05.569 "trsvcid": "4420" 00:17:05.569 }, 00:17:05.569 "peer_address": { 00:17:05.569 "trtype": "TCP", 00:17:05.569 "adrfam": "IPv4", 00:17:05.569 "traddr": "10.0.0.1", 00:17:05.569 "trsvcid": "38364" 00:17:05.569 }, 00:17:05.569 "auth": { 00:17:05.569 "state": "completed", 00:17:05.569 "digest": "sha512", 00:17:05.569 "dhgroup": "ffdhe3072" 00:17:05.569 } 00:17:05.569 } 00:17:05.569 ]' 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.569 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.569 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.569 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.569 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.569 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.569 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.140 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:06.140 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.705 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.963 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:06.963 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.964 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.222 00:17:07.480 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.480 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.480 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.739 { 00:17:07.739 "cntlid": 119, 00:17:07.739 "qid": 0, 00:17:07.739 "state": "enabled", 00:17:07.739 "thread": "nvmf_tgt_poll_group_000", 00:17:07.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:07.739 "listen_address": { 00:17:07.739 "trtype": "TCP", 00:17:07.739 "adrfam": "IPv4", 00:17:07.739 "traddr": "10.0.0.3", 00:17:07.739 "trsvcid": "4420" 00:17:07.739 }, 00:17:07.739 "peer_address": { 00:17:07.739 "trtype": "TCP", 00:17:07.739 "adrfam": "IPv4", 00:17:07.739 "traddr": "10.0.0.1", 00:17:07.739 "trsvcid": "38386" 00:17:07.739 }, 00:17:07.739 "auth": { 00:17:07.739 "state": "completed", 00:17:07.739 "digest": "sha512", 00:17:07.739 "dhgroup": "ffdhe3072" 00:17:07.739 } 00:17:07.739 } 00:17:07.739 ]' 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.739 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.997 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:07.997 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.965 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.532 00:17:09.532 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.532 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.532 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.790 { 00:17:09.790 "cntlid": 121, 00:17:09.790 "qid": 0, 00:17:09.790 "state": "enabled", 00:17:09.790 "thread": "nvmf_tgt_poll_group_000", 00:17:09.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:09.790 "listen_address": { 00:17:09.790 "trtype": "TCP", 00:17:09.790 "adrfam": "IPv4", 00:17:09.790 "traddr": "10.0.0.3", 00:17:09.790 "trsvcid": "4420" 00:17:09.790 }, 00:17:09.790 "peer_address": { 00:17:09.790 "trtype": "TCP", 00:17:09.790 "adrfam": "IPv4", 00:17:09.790 "traddr": "10.0.0.1", 00:17:09.790 "trsvcid": "38426" 00:17:09.790 }, 00:17:09.790 "auth": { 00:17:09.790 "state": "completed", 00:17:09.790 "digest": "sha512", 00:17:09.790 "dhgroup": "ffdhe4096" 00:17:09.790 } 00:17:09.790 } 00:17:09.790 ]' 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.790 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.357 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret 
DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:10.357 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.935 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.205 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.770 00:17:11.770 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.770 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.770 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.028 { 00:17:12.028 "cntlid": 123, 00:17:12.028 "qid": 0, 00:17:12.028 "state": "enabled", 00:17:12.028 "thread": "nvmf_tgt_poll_group_000", 00:17:12.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:12.028 "listen_address": { 00:17:12.028 "trtype": "TCP", 00:17:12.028 "adrfam": "IPv4", 00:17:12.028 "traddr": "10.0.0.3", 00:17:12.028 "trsvcid": "4420" 00:17:12.028 }, 00:17:12.028 "peer_address": { 00:17:12.028 "trtype": "TCP", 00:17:12.028 "adrfam": "IPv4", 00:17:12.028 "traddr": "10.0.0.1", 00:17:12.028 "trsvcid": "38462" 00:17:12.028 }, 00:17:12.028 "auth": { 00:17:12.028 "state": "completed", 00:17:12.028 "digest": "sha512", 00:17:12.028 "dhgroup": "ffdhe4096" 00:17:12.028 } 00:17:12.028 } 00:17:12.028 ]' 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.028 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.286 11:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:12.286 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:12.852 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.109 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.367 11:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.367 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.624 00:17:13.624 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.624 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.624 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.882 { 00:17:13.882 "cntlid": 125, 00:17:13.882 "qid": 0, 00:17:13.882 "state": "enabled", 00:17:13.882 "thread": "nvmf_tgt_poll_group_000", 00:17:13.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:13.882 "listen_address": { 00:17:13.882 "trtype": "TCP", 00:17:13.882 "adrfam": "IPv4", 00:17:13.882 "traddr": "10.0.0.3", 00:17:13.882 "trsvcid": "4420" 00:17:13.882 }, 00:17:13.882 "peer_address": { 00:17:13.882 "trtype": "TCP", 00:17:13.882 "adrfam": "IPv4", 00:17:13.882 "traddr": "10.0.0.1", 00:17:13.882 "trsvcid": "38486" 00:17:13.882 }, 00:17:13.882 "auth": { 00:17:13.882 "state": "completed", 00:17:13.882 "digest": "sha512", 00:17:13.882 "dhgroup": "ffdhe4096" 00:17:13.882 } 00:17:13.882 } 00:17:13.882 ]' 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.882 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.140 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.140 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.140 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.140 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.140 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.398 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:14.398 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.528 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.785 00:17:15.785 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.785 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.785 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.043 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.043 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.043 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.043 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.043 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.044 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.044 { 00:17:16.044 "cntlid": 127, 00:17:16.044 "qid": 0, 00:17:16.044 "state": "enabled", 00:17:16.044 "thread": "nvmf_tgt_poll_group_000", 00:17:16.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:16.044 "listen_address": { 00:17:16.044 "trtype": "TCP", 00:17:16.044 "adrfam": "IPv4", 00:17:16.044 "traddr": "10.0.0.3", 00:17:16.044 "trsvcid": "4420" 00:17:16.044 }, 00:17:16.044 "peer_address": { 00:17:16.044 "trtype": "TCP", 00:17:16.044 "adrfam": "IPv4", 00:17:16.044 "traddr": "10.0.0.1", 00:17:16.044 "trsvcid": "57256" 00:17:16.044 }, 00:17:16.044 "auth": { 00:17:16.044 "state": "completed", 00:17:16.044 "digest": "sha512", 00:17:16.044 "dhgroup": "ffdhe4096" 00:17:16.044 } 00:17:16.044 } 00:17:16.044 ]' 00:17:16.044 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.044 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.044 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.305 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.305 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.305 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.305 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.305 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.572 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:16.572 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.137 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.395 11:28:12 
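
[editor's note] Note how the key3 iterations differ from key0..key2: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key and only unidirectional authentication is exercised. A hedged sketch of the two registration shapes follows; rpc.py stands for the full scripts/rpc.py path used in the log, and the exact contents of the keys/ckeys arrays are defined earlier in the script, outside this excerpt.

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1

    # Bidirectional (key0..key2 pattern): the controller must also prove its
    # identity back to the host using the paired ckeyN.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Unidirectional (key3 pattern): no controller key is registered, so only
    # the host is authenticated.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3
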
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.395 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.960 00:17:17.960 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.960 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.960 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.218 { 00:17:18.218 "cntlid": 129, 00:17:18.218 "qid": 0, 00:17:18.218 "state": "enabled", 00:17:18.218 "thread": "nvmf_tgt_poll_group_000", 00:17:18.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:18.218 "listen_address": { 00:17:18.218 "trtype": "TCP", 00:17:18.218 "adrfam": "IPv4", 00:17:18.218 "traddr": "10.0.0.3", 00:17:18.218 "trsvcid": "4420" 00:17:18.218 }, 00:17:18.218 "peer_address": { 00:17:18.218 "trtype": "TCP", 00:17:18.218 "adrfam": "IPv4", 00:17:18.218 "traddr": "10.0.0.1", 00:17:18.218 "trsvcid": "57270" 00:17:18.218 }, 00:17:18.218 "auth": { 00:17:18.218 "state": "completed", 00:17:18.218 "digest": "sha512", 00:17:18.218 "dhgroup": "ffdhe6144" 00:17:18.218 } 00:17:18.218 } 00:17:18.218 ]' 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.218 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.476 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.476 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.476 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.476 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.476 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.733 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:18.734 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.557 11:28:15 
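
[editor's note] The [[ sha512 == ... ]] style checks above are driven by the qpair listing: after each attach the harness queries the target for its active queue pairs and asserts that the auth block reports the negotiated digest, dhgroup and a completed state. Roughly, using the same RPC and jq filters as the trace (target-side default RPC socket assumed, ffdhe6144 shown because that is the group under test at this point):

    subnqn=nqn.2024-03.io.spdk:cnode0
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$subnqn")

    # Each qpair entry carries an "auth" object describing the negotiated parameters.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
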
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.557 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.123 00:17:20.123 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.123 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.123 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.381 { 00:17:20.381 "cntlid": 131, 00:17:20.381 "qid": 0, 00:17:20.381 "state": "enabled", 00:17:20.381 "thread": "nvmf_tgt_poll_group_000", 00:17:20.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:20.381 "listen_address": { 00:17:20.381 "trtype": "TCP", 00:17:20.381 "adrfam": "IPv4", 00:17:20.381 "traddr": "10.0.0.3", 00:17:20.381 "trsvcid": "4420" 00:17:20.381 }, 00:17:20.381 "peer_address": { 00:17:20.381 "trtype": "TCP", 00:17:20.381 "adrfam": "IPv4", 00:17:20.381 "traddr": "10.0.0.1", 00:17:20.381 "trsvcid": "57290" 00:17:20.381 }, 00:17:20.381 "auth": { 00:17:20.381 "state": "completed", 00:17:20.381 "digest": "sha512", 00:17:20.381 "dhgroup": "ffdhe6144" 00:17:20.381 } 00:17:20.381 } 00:17:20.381 ]' 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.381 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:17:20.639 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.639 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.639 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.897 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:20.897 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:21.831 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.831 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:21.831 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.831 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.831 11:28:17 
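
[editor's note] Each iteration also checks the in-band path from the kernel initiator: nvme-cli connects with the same secrets in their DHHC-1 text form, then the connection is torn down and the host entry removed before the next digest/dhgroup combination. A condensed sketch with the flags visible in the trace; host_secret/ctrl_secret are placeholders filled here with the throwaway key1/ckey1 test secrets from this log.

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1
    host_secret='DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv:'
    ctrl_secret='DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==:'

    # Kernel-initiator connect with explicit DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$hostnqn" --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

    # Tear down and deauthorize the host before the next combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
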
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.831 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.411 00:17:22.411 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.411 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.411 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.668 { 00:17:22.668 "cntlid": 133, 00:17:22.668 "qid": 0, 00:17:22.668 "state": "enabled", 00:17:22.668 "thread": "nvmf_tgt_poll_group_000", 00:17:22.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:22.668 "listen_address": { 00:17:22.668 "trtype": "TCP", 00:17:22.668 "adrfam": "IPv4", 00:17:22.668 "traddr": "10.0.0.3", 00:17:22.668 "trsvcid": "4420" 00:17:22.668 }, 00:17:22.668 "peer_address": { 00:17:22.668 "trtype": "TCP", 00:17:22.668 "adrfam": "IPv4", 00:17:22.668 "traddr": "10.0.0.1", 00:17:22.668 "trsvcid": "57314" 00:17:22.668 }, 00:17:22.668 "auth": { 00:17:22.668 "state": "completed", 00:17:22.668 "digest": "sha512", 00:17:22.668 "dhgroup": "ffdhe6144" 00:17:22.668 } 00:17:22.668 } 00:17:22.668 ]' 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.668 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.669 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.669 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:17:22.669 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.931 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.931 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.931 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.193 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:23.193 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.127 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.693 00:17:24.693 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.693 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.693 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.951 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.951 { 00:17:24.951 "cntlid": 135, 00:17:24.951 "qid": 0, 00:17:24.951 "state": "enabled", 00:17:24.951 "thread": "nvmf_tgt_poll_group_000", 00:17:24.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:24.951 "listen_address": { 00:17:24.951 "trtype": "TCP", 00:17:24.951 "adrfam": "IPv4", 00:17:24.951 "traddr": "10.0.0.3", 00:17:24.951 "trsvcid": "4420" 00:17:24.951 }, 00:17:24.951 "peer_address": { 00:17:24.951 "trtype": "TCP", 00:17:24.951 "adrfam": "IPv4", 00:17:24.951 "traddr": "10.0.0.1", 00:17:24.951 "trsvcid": "59894" 00:17:24.951 }, 00:17:24.951 "auth": { 00:17:24.951 "state": "completed", 00:17:24.951 "digest": "sha512", 00:17:24.951 "dhgroup": "ffdhe6144" 00:17:24.951 } 00:17:24.951 } 00:17:24.951 ]' 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.210 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.468 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:25.468 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.403 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.661 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.226 00:17:27.226 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.226 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.226 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.485 { 00:17:27.485 "cntlid": 137, 00:17:27.485 "qid": 0, 00:17:27.485 "state": "enabled", 00:17:27.485 "thread": "nvmf_tgt_poll_group_000", 00:17:27.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:27.485 "listen_address": { 00:17:27.485 "trtype": "TCP", 00:17:27.485 "adrfam": "IPv4", 00:17:27.485 "traddr": "10.0.0.3", 00:17:27.485 "trsvcid": "4420" 00:17:27.485 }, 00:17:27.485 "peer_address": { 00:17:27.485 "trtype": "TCP", 00:17:27.485 "adrfam": "IPv4", 00:17:27.485 "traddr": "10.0.0.1", 00:17:27.485 "trsvcid": "59922" 00:17:27.485 }, 00:17:27.485 "auth": { 00:17:27.485 "state": "completed", 00:17:27.485 "digest": "sha512", 00:17:27.485 "dhgroup": "ffdhe8192" 00:17:27.485 } 00:17:27.485 } 00:17:27.485 ]' 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.485 11:28:22 
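
[editor's note] A note on the secret strings scattered through this log: they appear to follow the usual nvme-cli text representation, DHHC-1:<hh>:<base64 payload>:, where the two-digit field records how the secret was transformed when it was generated (00 for a cleartext secret, 01/02/03 for SHA-256/384/512). That mapping is background knowledge rather than something the trace states, but it lines up with the key0..key3 naming used in this run, whose secrets carry 00 through 03 respectively. Extracting the field is trivial:

    secret='DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv:'
    cut -d: -f2 <<< "$secret"    # prints 01, i.e. a SHA-256-transformed secret under the assumed mapping
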
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.485 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.743 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.743 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.743 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.002 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:28.002 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:28.569 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.569 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.826 11:28:24 
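
[editor's note] Stepping back, this whole section is one nested loop: for every DH group in the configured list and every key id, the harness reconfigures the host options, runs connect_authenticate, and then repeats the nvme-cli round trip. The control flow below is reconstructed from the "for dhgroup" / "for keyid" trace lines; the dhgroups and keys arrays are defined earlier in auth.sh outside this excerpt, only sha512 appears as the digest in this slice, and kernel_roundtrip is a made-up placeholder for the nvme connect/disconnect plus nvmf_subsystem_remove_host steps shown above.

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096 ffdhe6144 ffdhe8192 in this part of the log
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # add_host / attach / verify / detach (auth.sh@65-78)
            kernel_roundtrip "$keyid"                         # placeholder: nvme connect, disconnect, remove_host
        done
    done
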
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.826 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.760 00:17:29.760 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.760 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.760 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.018 { 00:17:30.018 "cntlid": 139, 00:17:30.018 "qid": 0, 00:17:30.018 "state": "enabled", 00:17:30.018 "thread": "nvmf_tgt_poll_group_000", 00:17:30.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:30.018 "listen_address": { 00:17:30.018 "trtype": "TCP", 00:17:30.018 "adrfam": "IPv4", 00:17:30.018 "traddr": "10.0.0.3", 00:17:30.018 "trsvcid": "4420" 00:17:30.018 }, 00:17:30.018 "peer_address": { 00:17:30.018 "trtype": "TCP", 00:17:30.018 "adrfam": "IPv4", 00:17:30.018 "traddr": "10.0.0.1", 00:17:30.018 "trsvcid": "59952" 00:17:30.018 }, 00:17:30.018 "auth": { 00:17:30.018 "state": "completed", 00:17:30.018 "digest": "sha512", 00:17:30.018 "dhgroup": "ffdhe8192" 00:17:30.018 } 00:17:30.018 } 00:17:30.018 ]' 00:17:30.018 11:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.018 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.277 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:30.277 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: --dhchap-ctrl-secret DHHC-1:02:YWI1MTU2N2EzMTIzMWQyMDA4M2JlNDhhMDIwMjdmZWVhMDZmODhkNDk4ZTY5M2M1WfjcQQ==: 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.212 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.471 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.037 00:17:32.037 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.037 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.037 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.296 { 00:17:32.296 "cntlid": 141, 00:17:32.296 "qid": 0, 00:17:32.296 "state": "enabled", 00:17:32.296 "thread": "nvmf_tgt_poll_group_000", 00:17:32.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:32.296 "listen_address": { 00:17:32.296 "trtype": "TCP", 00:17:32.296 "adrfam": "IPv4", 00:17:32.296 "traddr": "10.0.0.3", 00:17:32.296 "trsvcid": "4420" 00:17:32.296 }, 00:17:32.296 "peer_address": { 00:17:32.296 "trtype": "TCP", 00:17:32.296 "adrfam": "IPv4", 00:17:32.296 "traddr": "10.0.0.1", 00:17:32.296 "trsvcid": "59976" 00:17:32.296 }, 00:17:32.296 "auth": { 00:17:32.296 "state": "completed", 00:17:32.296 "digest": 
"sha512", 00:17:32.296 "dhgroup": "ffdhe8192" 00:17:32.296 } 00:17:32.296 } 00:17:32.296 ]' 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.296 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.870 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:32.870 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:01:NTVlYjNiY2EzNzg2OGE0NmQwMWQ3YTYyYTJiNmQ3OTac+Ku5: 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.436 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.694 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.694 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.694 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.695 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.629 00:17:34.629 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.629 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.629 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.886 { 00:17:34.886 "cntlid": 143, 00:17:34.886 "qid": 0, 00:17:34.886 "state": "enabled", 00:17:34.886 "thread": "nvmf_tgt_poll_group_000", 00:17:34.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:34.886 "listen_address": { 00:17:34.886 "trtype": "TCP", 00:17:34.886 "adrfam": "IPv4", 00:17:34.886 "traddr": "10.0.0.3", 00:17:34.886 "trsvcid": "4420" 00:17:34.886 }, 00:17:34.886 "peer_address": { 00:17:34.886 "trtype": "TCP", 00:17:34.886 "adrfam": "IPv4", 00:17:34.886 "traddr": "10.0.0.1", 00:17:34.886 "trsvcid": "59988" 00:17:34.886 }, 00:17:34.886 "auth": { 00:17:34.886 "state": "completed", 00:17:34.886 
"digest": "sha512", 00:17:34.886 "dhgroup": "ffdhe8192" 00:17:34.886 } 00:17:34.886 } 00:17:34.886 ]' 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.886 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.145 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:35.145 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.080 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.339 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.340 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.297 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.297 { 00:17:37.297 "cntlid": 145, 00:17:37.297 "qid": 0, 00:17:37.297 "state": "enabled", 00:17:37.297 "thread": "nvmf_tgt_poll_group_000", 00:17:37.297 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:37.297 "listen_address": { 00:17:37.297 "trtype": "TCP", 00:17:37.297 "adrfam": "IPv4", 00:17:37.297 "traddr": "10.0.0.3", 00:17:37.297 "trsvcid": "4420" 00:17:37.297 }, 00:17:37.297 "peer_address": { 00:17:37.297 "trtype": "TCP", 00:17:37.297 "adrfam": "IPv4", 00:17:37.297 "traddr": "10.0.0.1", 00:17:37.297 "trsvcid": "40818" 00:17:37.297 }, 00:17:37.297 "auth": { 00:17:37.297 "state": "completed", 00:17:37.297 "digest": "sha512", 00:17:37.297 "dhgroup": "ffdhe8192" 00:17:37.297 } 00:17:37.297 } 00:17:37.297 ]' 00:17:37.297 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.555 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.814 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:37.814 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:00:Njc0NGQzYzY3NDM3NGYzOWU5ZTRmNTllYWM1YzllNTZiNjIxOTQzOGIyZDNjNTM4gwwDvA==: --dhchap-ctrl-secret DHHC-1:03:MDE2ZmI4ZjIwYTZjODQwNGViNTk1Njg5ZmU0YThiMzY5NGJmMmViMTRiMTliZDg0YTU2ZDVkMTQ0YjRlM2UyOFyVQ/8=: 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 00:17:38.381 11:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.381 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:38.382 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.382 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.947 request: 00:17:38.947 { 00:17:38.947 "name": "nvme0", 00:17:38.947 "trtype": "tcp", 00:17:38.947 "traddr": "10.0.0.3", 00:17:38.947 "adrfam": "ipv4", 00:17:38.947 "trsvcid": "4420", 00:17:38.947 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:38.947 "prchk_reftag": false, 00:17:38.947 "prchk_guard": false, 00:17:38.947 "hdgst": false, 00:17:38.947 "ddgst": false, 00:17:38.947 "dhchap_key": "key2", 00:17:38.947 "allow_unrecognized_csi": false, 00:17:38.947 "method": "bdev_nvme_attach_controller", 00:17:38.947 "req_id": 1 00:17:38.947 } 00:17:38.947 Got JSON-RPC error response 00:17:38.947 response: 00:17:38.947 { 00:17:38.947 "code": -5, 00:17:38.947 "message": "Input/output error" 00:17:38.947 } 00:17:39.205 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.205 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.205 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:39.206 
11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.206 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.776 request: 00:17:39.776 { 00:17:39.776 "name": "nvme0", 00:17:39.776 "trtype": "tcp", 00:17:39.776 "traddr": "10.0.0.3", 00:17:39.776 "adrfam": "ipv4", 00:17:39.776 "trsvcid": "4420", 00:17:39.776 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:39.776 "prchk_reftag": false, 00:17:39.776 "prchk_guard": false, 00:17:39.776 "hdgst": false, 00:17:39.776 "ddgst": false, 00:17:39.776 "dhchap_key": "key1", 00:17:39.776 "dhchap_ctrlr_key": "ckey2", 00:17:39.776 "allow_unrecognized_csi": false, 00:17:39.776 "method": "bdev_nvme_attach_controller", 00:17:39.776 "req_id": 1 00:17:39.776 } 00:17:39.776 Got JSON-RPC error response 00:17:39.776 response: 00:17:39.776 { 
00:17:39.776 "code": -5, 00:17:39.776 "message": "Input/output error" 00:17:39.776 } 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 00:17:39.776 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.777 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.365 
request: 00:17:40.365 { 00:17:40.365 "name": "nvme0", 00:17:40.365 "trtype": "tcp", 00:17:40.365 "traddr": "10.0.0.3", 00:17:40.365 "adrfam": "ipv4", 00:17:40.365 "trsvcid": "4420", 00:17:40.365 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:40.365 "prchk_reftag": false, 00:17:40.365 "prchk_guard": false, 00:17:40.365 "hdgst": false, 00:17:40.365 "ddgst": false, 00:17:40.365 "dhchap_key": "key1", 00:17:40.365 "dhchap_ctrlr_key": "ckey1", 00:17:40.365 "allow_unrecognized_csi": false, 00:17:40.365 "method": "bdev_nvme_attach_controller", 00:17:40.365 "req_id": 1 00:17:40.365 } 00:17:40.365 Got JSON-RPC error response 00:17:40.365 response: 00:17:40.365 { 00:17:40.365 "code": -5, 00:17:40.365 "message": "Input/output error" 00:17:40.365 } 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67510 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67510 ']' 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67510 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67510 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.365 killing process with pid 67510 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67510' 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67510 00:17:40.365 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67510 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:40.626 11:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70665 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70665 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70665 ']' 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.626 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70665 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70665 ']' 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
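At this point the test has killed the previous target (pid 67510) and is bringing up a fresh nvmf_tgt with DH-HMAC-CHAP debug logging enabled so the remaining authentication flows are traced. A condensed sketch of that restart, using only the command and flags visible in this run (the netns name, binary path, socket and pid 70665 are specific to this environment; the backgrounding is implied by the harness):

    # launch the target inside the test netns with nvmf_auth logging enabled
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # the harness then blocks (waitforlisten) until the app answers on /var/tmp/spdk.sock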
00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.001 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.259 null0 00:17:42.259 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.259 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:42.259 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bv0 00:17:42.259 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ycb ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ycb 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SNU 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Lna ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Lna 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:42.260 11:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aUt 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Lk1 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lk1 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Tdi 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
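The step starting at target/auth.sh@174 loads the previously generated key files into the target keyring, grants the host key3 on the subsystem, and then re-attaches from the host side using that key. A condensed sketch of the same flow with the exact names and paths from this run (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the target side appears to use the default /var/tmp/spdk.sock socket, the host side /var/tmp/host.sock):

    # target side: register the key material and allow the host to authenticate with key3
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Tdi
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3
    # host side: attach the controller, presenting the same key for DH-HMAC-CHAP
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3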
00:17:42.260 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.635 nvme0n1 00:17:43.635 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.635 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.635 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.635 { 00:17:43.635 "cntlid": 1, 00:17:43.635 "qid": 0, 00:17:43.635 "state": "enabled", 00:17:43.635 "thread": "nvmf_tgt_poll_group_000", 00:17:43.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:43.635 "listen_address": { 00:17:43.635 "trtype": "TCP", 00:17:43.635 "adrfam": "IPv4", 00:17:43.635 "traddr": "10.0.0.3", 00:17:43.635 "trsvcid": "4420" 00:17:43.635 }, 00:17:43.635 "peer_address": { 00:17:43.635 "trtype": "TCP", 00:17:43.635 "adrfam": "IPv4", 00:17:43.635 "traddr": "10.0.0.1", 00:17:43.635 "trsvcid": "40880" 00:17:43.635 }, 00:17:43.635 "auth": { 00:17:43.635 "state": "completed", 00:17:43.635 "digest": "sha512", 00:17:43.635 "dhgroup": "ffdhe8192" 00:17:43.635 } 00:17:43.635 } 00:17:43.635 ]' 00:17:43.635 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.893 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.152 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:44.152 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:44.719 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key3 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:44.977 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.235 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.495 request: 00:17:45.495 { 00:17:45.495 "name": "nvme0", 00:17:45.495 "trtype": "tcp", 00:17:45.495 "traddr": "10.0.0.3", 00:17:45.495 "adrfam": "ipv4", 00:17:45.495 "trsvcid": "4420", 00:17:45.495 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:45.495 "prchk_reftag": false, 00:17:45.495 "prchk_guard": false, 00:17:45.495 "hdgst": false, 00:17:45.495 "ddgst": false, 00:17:45.495 "dhchap_key": "key3", 00:17:45.495 "allow_unrecognized_csi": false, 00:17:45.495 "method": "bdev_nvme_attach_controller", 00:17:45.495 "req_id": 1 00:17:45.495 } 00:17:45.495 Got JSON-RPC error response 00:17:45.495 response: 00:17:45.495 { 00:17:45.495 "code": -5, 00:17:45.495 "message": "Input/output error" 00:17:45.495 } 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:45.495 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.757 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.018 request: 00:17:46.018 { 00:17:46.018 "name": "nvme0", 00:17:46.018 "trtype": "tcp", 00:17:46.018 "traddr": "10.0.0.3", 00:17:46.018 "adrfam": "ipv4", 00:17:46.018 "trsvcid": "4420", 00:17:46.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:46.018 "prchk_reftag": false, 00:17:46.018 "prchk_guard": false, 00:17:46.018 "hdgst": false, 00:17:46.018 "ddgst": false, 00:17:46.018 "dhchap_key": "key3", 00:17:46.018 "allow_unrecognized_csi": false, 00:17:46.018 "method": "bdev_nvme_attach_controller", 00:17:46.018 "req_id": 1 00:17:46.018 } 00:17:46.018 Got JSON-RPC error response 00:17:46.018 response: 00:17:46.018 { 00:17:46.018 "code": -5, 00:17:46.018 "message": "Input/output error" 00:17:46.018 } 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.018 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.283 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.871 request: 00:17:46.871 { 00:17:46.871 "name": "nvme0", 00:17:46.871 "trtype": "tcp", 00:17:46.871 "traddr": "10.0.0.3", 00:17:46.871 "adrfam": "ipv4", 00:17:46.871 "trsvcid": "4420", 00:17:46.871 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:46.871 "prchk_reftag": false, 00:17:46.871 "prchk_guard": false, 00:17:46.871 "hdgst": false, 00:17:46.871 "ddgst": false, 00:17:46.871 "dhchap_key": "key0", 00:17:46.871 "dhchap_ctrlr_key": "key1", 00:17:46.871 "allow_unrecognized_csi": false, 00:17:46.871 "method": "bdev_nvme_attach_controller", 00:17:46.871 "req_id": 1 00:17:46.871 } 00:17:46.871 Got JSON-RPC error response 00:17:46.871 response: 00:17:46.871 { 00:17:46.871 "code": -5, 00:17:46.871 "message": "Input/output error" 00:17:46.871 } 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:46.871 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:47.136 nvme0n1 00:17:47.136 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:47.136 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:47.136 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.395 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.395 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.395 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.654 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 00:17:47.654 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.654 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.912 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.912 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:47.912 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:47.912 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:48.847 nvme0n1 00:17:48.847 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:48.847 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.847 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.105 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:49.671 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.671 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:49.672 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --hostid 8f4e03b1-7080-439e-b116-202a2cecf6a1 -l 0 --dhchap-secret DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: --dhchap-ctrl-secret DHHC-1:03:YWNjNTI5NGM0MjM1MzQyZjRlYzkzNjNlZmQxNGQyMWYwMzkzNGI0ODBkMTQ4MDU2MmEwYWEzNjE0MzAwODUzZh9ziNk=: 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.238 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:50.497 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:51.431 request: 00:17:51.431 { 00:17:51.431 "name": "nvme0", 00:17:51.431 "trtype": "tcp", 00:17:51.431 "traddr": "10.0.0.3", 00:17:51.431 "adrfam": "ipv4", 00:17:51.431 "trsvcid": "4420", 00:17:51.431 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1", 00:17:51.431 "prchk_reftag": false, 00:17:51.431 "prchk_guard": false, 00:17:51.431 "hdgst": false, 00:17:51.431 "ddgst": false, 00:17:51.431 "dhchap_key": "key1", 00:17:51.431 "allow_unrecognized_csi": false, 00:17:51.431 "method": "bdev_nvme_attach_controller", 00:17:51.431 "req_id": 1 00:17:51.431 } 00:17:51.431 Got JSON-RPC error response 00:17:51.431 response: 00:17:51.431 { 00:17:51.431 "code": -5, 00:17:51.431 "message": "Input/output error" 00:17:51.431 } 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.431 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.396 nvme0n1 00:17:52.396 
11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:52.396 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.396 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:52.655 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.655 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.655 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:52.915 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:53.174 nvme0n1 00:17:53.432 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:53.432 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:53.432 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.690 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.690 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.690 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.948 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: '' 2s 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: ]] 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTZmNWYxNmU0ODZkZWU2YzhjOTZlODIyZjRlZjM1MzlzRMRv: 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:53.948 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: 2s 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:55.850 11:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: ]] 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzZlYWVlNWYyODg1OTdiMDQ1OWUzNjI2NWE3ZDEyYjgyMzY4MTliZDY2MzQwZTcxz90Bag==: 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:55.850 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.395 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.962 nvme0n1 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.962 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:59.928 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:00.187 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:00.187 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:00.187 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.775 11:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.775 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.342 request: 00:18:01.342 { 00:18:01.342 "name": "nvme0", 00:18:01.342 "dhchap_key": "key1", 00:18:01.342 "dhchap_ctrlr_key": "key3", 00:18:01.342 "method": "bdev_nvme_set_keys", 00:18:01.342 "req_id": 1 00:18:01.342 } 00:18:01.342 Got JSON-RPC error response 00:18:01.342 response: 00:18:01.342 { 00:18:01.342 "code": -13, 00:18:01.342 "message": "Permission denied" 00:18:01.342 } 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.342 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:01.601 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:01.601 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:02.533 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:02.533 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:02.533 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.829 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.803 nvme0n1 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.803 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:04.738 request: 00:18:04.738 { 00:18:04.738 "name": "nvme0", 00:18:04.738 "dhchap_key": "key2", 00:18:04.738 "dhchap_ctrlr_key": "key0", 00:18:04.738 "method": "bdev_nvme_set_keys", 00:18:04.738 "req_id": 1 00:18:04.738 } 00:18:04.738 Got JSON-RPC error response 00:18:04.738 response: 00:18:04.738 { 00:18:04.738 "code": -13, 00:18:04.738 "message": "Permission denied" 00:18:04.738 } 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:04.738 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.738 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:04.738 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67542 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67542 ']' 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67542 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67542 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.115 killing process with pid 67542 00:18:06.115 11:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67542' 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67542 00:18:06.115 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67542 00:18:06.681 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:06.681 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:06.681 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.939 rmmod nvme_tcp 00:18:06.939 rmmod nvme_fabrics 00:18:06.939 rmmod nvme_keyring 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70665 ']' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70665 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70665 ']' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70665 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70665 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.939 killing process with pid 70665 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70665' 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70665 00:18:06.939 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70665 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 
00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:07.197 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:07.198 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Bv0 /tmp/spdk.key-sha256.SNU /tmp/spdk.key-sha384.aUt /tmp/spdk.key-sha512.Tdi /tmp/spdk.key-sha512.ycb /tmp/spdk.key-sha384.Lna /tmp/spdk.key-sha256.Lk1 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:07.457 00:18:07.457 real 3m19.976s 00:18:07.457 user 7m57.789s 00:18:07.457 sys 0m30.849s 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.457 ************************************ 00:18:07.457 END TEST nvmf_auth_target 
00:18:07.457 ************************************ 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.457 ************************************ 00:18:07.457 START TEST nvmf_bdevio_no_huge 00:18:07.457 ************************************ 00:18:07.457 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:07.716 * Looking for test storage... 00:18:07.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:07.716 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.717 --rc genhtml_branch_coverage=1 00:18:07.717 --rc genhtml_function_coverage=1 00:18:07.717 --rc genhtml_legend=1 00:18:07.717 --rc geninfo_all_blocks=1 00:18:07.717 --rc geninfo_unexecuted_blocks=1 00:18:07.717 00:18:07.717 ' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.717 --rc genhtml_branch_coverage=1 00:18:07.717 --rc genhtml_function_coverage=1 00:18:07.717 --rc genhtml_legend=1 00:18:07.717 --rc geninfo_all_blocks=1 00:18:07.717 --rc geninfo_unexecuted_blocks=1 00:18:07.717 00:18:07.717 ' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.717 --rc genhtml_branch_coverage=1 00:18:07.717 --rc genhtml_function_coverage=1 00:18:07.717 --rc genhtml_legend=1 00:18:07.717 --rc geninfo_all_blocks=1 00:18:07.717 --rc geninfo_unexecuted_blocks=1 00:18:07.717 00:18:07.717 ' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:07.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.717 --rc genhtml_branch_coverage=1 00:18:07.717 --rc genhtml_function_coverage=1 00:18:07.717 --rc genhtml_legend=1 00:18:07.717 --rc geninfo_all_blocks=1 00:18:07.717 --rc geninfo_unexecuted_blocks=1 00:18:07.717 00:18:07.717 ' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.717 
11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.717 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.718 
11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:07.718 Cannot find device "nvmf_init_br" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:07.718 Cannot find device "nvmf_init_br2" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:07.718 Cannot find device "nvmf_tgt_br" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.718 Cannot find device "nvmf_tgt_br2" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:07.718 Cannot find device "nvmf_init_br" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:07.718 Cannot find device "nvmf_init_br2" 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:18:07.718 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:07.977 Cannot find device "nvmf_tgt_br" 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:07.977 Cannot find device "nvmf_tgt_br2" 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:07.977 Cannot find device "nvmf_br" 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:07.977 Cannot find device "nvmf_init_if" 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:07.977 Cannot find device "nvmf_init_if2" 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:18:07.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:07.977 11:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:07.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:07.977 00:18:07.977 --- 10.0.0.3 ping statistics --- 00:18:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.977 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:07.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:07.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:18:07.977 00:18:07.977 --- 10.0.0.4 ping statistics --- 00:18:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.977 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:07.977 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:08.235 00:18:08.235 --- 10.0.0.1 ping statistics --- 00:18:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.235 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:08.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:08.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:08.235 00:18:08.235 --- 10.0.0.2 ping statistics --- 00:18:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.235 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=71342 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 71342 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71342 ']' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.235 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.235 [2024-10-07 11:29:03.614159] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:18:08.235 [2024-10-07 11:29:03.614264] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:08.494 [2024-10-07 11:29:03.767253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.494 [2024-10-07 11:29:03.918098] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.494 [2024-10-07 11:29:03.918175] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.494 [2024-10-07 11:29:03.918190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.494 [2024-10-07 11:29:03.918200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.494 [2024-10-07 11:29:03.918209] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.494 [2024-10-07 11:29:03.919119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:08.494 [2024-10-07 11:29:03.919222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:18:08.494 [2024-10-07 11:29:03.919274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:18:08.494 [2024-10-07 11:29:03.919282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.494 [2024-10-07 11:29:03.925679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 [2024-10-07 11:29:04.672216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 Malloc0 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.463 11:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 [2024-10-07 11:29:04.712592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:09.463 { 00:18:09.463 "params": { 00:18:09.463 "name": "Nvme$subsystem", 00:18:09.463 "trtype": "$TEST_TRANSPORT", 00:18:09.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.463 "adrfam": "ipv4", 00:18:09.463 "trsvcid": "$NVMF_PORT", 00:18:09.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.463 "hdgst": ${hdgst:-false}, 00:18:09.463 "ddgst": ${ddgst:-false} 00:18:09.463 }, 00:18:09.463 "method": "bdev_nvme_attach_controller" 00:18:09.463 } 00:18:09.463 EOF 00:18:09.463 )") 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
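The target-side provisioning for this bdevio run is all done through rpc_cmd, which wraps /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the target started above. Condensed from the traced calls (a sketch, not the test script itself), the sequence is:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Transport, backing bdev, subsystem, namespace, listener (bdevio.sh@18-22 above).
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevio then attaches as an initiator, reading the JSON config (shown in the
    # trace just below) on file descriptor 62, also without hugepages.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024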
00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:09.463 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:09.463 "params": { 00:18:09.463 "name": "Nvme1", 00:18:09.464 "trtype": "tcp", 00:18:09.464 "traddr": "10.0.0.3", 00:18:09.464 "adrfam": "ipv4", 00:18:09.464 "trsvcid": "4420", 00:18:09.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.464 "hdgst": false, 00:18:09.464 "ddgst": false 00:18:09.464 }, 00:18:09.464 "method": "bdev_nvme_attach_controller" 00:18:09.464 }' 00:18:09.464 [2024-10-07 11:29:04.763502] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:18:09.464 [2024-10-07 11:29:04.763591] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71378 ] 00:18:09.464 [2024-10-07 11:29:04.900746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.723 [2024-10-07 11:29:05.026409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.723 [2024-10-07 11:29:05.026476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.723 [2024-10-07 11:29:05.026479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.723 [2024-10-07 11:29:05.040233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:09.984 I/O targets: 00:18:09.984 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:09.984 00:18:09.984 00:18:09.984 CUnit - A unit testing framework for C - Version 2.1-3 00:18:09.984 http://cunit.sourceforge.net/ 00:18:09.984 00:18:09.984 00:18:09.984 Suite: bdevio tests on: Nvme1n1 00:18:09.984 Test: blockdev write read block ...passed 00:18:09.984 Test: blockdev write zeroes read block ...passed 00:18:09.984 Test: blockdev write zeroes read no split ...passed 00:18:09.984 Test: blockdev write zeroes read split ...passed 00:18:09.984 Test: blockdev write zeroes read split partial ...passed 00:18:09.984 Test: blockdev reset ...[2024-10-07 11:29:05.298620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:09.984 [2024-10-07 11:29:05.298742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2209720 (9): Bad file descriptor 00:18:09.984 [2024-10-07 11:29:05.315027] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:09.984 passed 00:18:09.984 Test: blockdev write read 8 blocks ...passed 00:18:09.984 Test: blockdev write read size > 128k ...passed 00:18:09.984 Test: blockdev write read invalid size ...passed 00:18:09.984 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:09.984 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:09.984 Test: blockdev write read max offset ...passed 00:18:09.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.984 Test: blockdev writev readv 8 blocks ...passed 00:18:09.984 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.984 Test: blockdev writev readv block ...passed 00:18:09.984 Test: blockdev writev readv size > 128k ...passed 00:18:09.984 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:09.984 Test: blockdev comparev and writev ...[2024-10-07 11:29:05.323774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.323938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.324050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.324164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.324556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.324785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.324880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.325544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.325658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.325788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.325865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.326405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.326538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.326633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.984 [2024-10-07 11:29:05.326719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.984 passed 00:18:09.984 Test: blockdev nvme passthru rw ...passed 00:18:09.984 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:29:05.327671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.984 [2024-10-07 11:29:05.327794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.328023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.984 [2024-10-07 11:29:05.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.328361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.984 [2024-10-07 11:29:05.328480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.984 [2024-10-07 11:29:05.328706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.984 [2024-10-07 11:29:05.328809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:09.984 passed 00:18:09.984 Test: blockdev nvme admin passthru ...passed 00:18:09.984 Test: blockdev copy ...passed 00:18:09.984 00:18:09.984 Run Summary: Type Total Ran Passed Failed Inactive 00:18:09.984 suites 1 1 n/a 0 0 00:18:09.984 tests 23 23 23 0 0 00:18:09.984 asserts 152 152 152 0 n/a 00:18:09.984 00:18:09.984 Elapsed time = 0.183 seconds 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.548 rmmod nvme_tcp 00:18:10.548 rmmod nvme_fabrics 00:18:10.548 rmmod nvme_keyring 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 71342 ']' 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 71342 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71342 ']' 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71342 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71342 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71342' 00:18:10.548 killing process with pid 71342 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71342 00:18:10.548 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71342 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.113 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:11.114 11:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.114 ************************************ 00:18:11.114 END TEST nvmf_bdevio_no_huge 00:18:11.114 ************************************ 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:18:11.114 00:18:11.114 real 0m3.662s 00:18:11.114 user 0m11.222s 00:18:11.114 sys 0m1.569s 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.114 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.377 ************************************ 00:18:11.377 START TEST nvmf_tls 00:18:11.377 ************************************ 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.377 * Looking for test storage... 
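Before tls.sh starts, nvmftestfini (traced above) has torn down everything the previous suite set up. Condensed into a sketch (the real helpers live in test/nvmf/common.sh; the final namespace deletion is an assumption, since _remove_spdk_ns runs with tracing disabled):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop the SPDK_NVMF-tagged iptables rules, then dismantle the veth/bridge topology.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns here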
00:18:11.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.377 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:11.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.378 --rc genhtml_branch_coverage=1 00:18:11.378 --rc genhtml_function_coverage=1 00:18:11.378 --rc genhtml_legend=1 00:18:11.378 --rc geninfo_all_blocks=1 00:18:11.378 --rc geninfo_unexecuted_blocks=1 00:18:11.378 00:18:11.378 ' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:11.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.378 --rc genhtml_branch_coverage=1 00:18:11.378 --rc genhtml_function_coverage=1 00:18:11.378 --rc genhtml_legend=1 00:18:11.378 --rc geninfo_all_blocks=1 00:18:11.378 --rc geninfo_unexecuted_blocks=1 00:18:11.378 00:18:11.378 ' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:11.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.378 --rc genhtml_branch_coverage=1 00:18:11.378 --rc genhtml_function_coverage=1 00:18:11.378 --rc genhtml_legend=1 00:18:11.378 --rc geninfo_all_blocks=1 00:18:11.378 --rc geninfo_unexecuted_blocks=1 00:18:11.378 00:18:11.378 ' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:11.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.378 --rc genhtml_branch_coverage=1 00:18:11.378 --rc genhtml_function_coverage=1 00:18:11.378 --rc genhtml_legend=1 00:18:11.378 --rc geninfo_all_blocks=1 00:18:11.378 --rc geninfo_unexecuted_blocks=1 00:18:11.378 00:18:11.378 ' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.378 11:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.378 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:11.378 
11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:11.378 Cannot find device "nvmf_init_br" 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:11.378 Cannot find device "nvmf_init_br2" 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:11.378 Cannot find device "nvmf_tgt_br" 00:18:11.378 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:18:11.379 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.669 Cannot find device "nvmf_tgt_br2" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:11.669 Cannot find device "nvmf_init_br" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:11.669 Cannot find device "nvmf_init_br2" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:11.669 Cannot find device "nvmf_tgt_br" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:11.669 Cannot find device "nvmf_tgt_br2" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:11.669 Cannot find device "nvmf_br" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:11.669 Cannot find device "nvmf_init_if" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:11.669 Cannot find device "nvmf_init_if2" 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.669 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:11.669 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:11.928 11:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:11.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:18:11.928 00:18:11.928 --- 10.0.0.3 ping statistics --- 00:18:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.928 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:11.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:11.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:11.928 00:18:11.928 --- 10.0.0.4 ping statistics --- 00:18:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.928 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:11.928 00:18:11.928 --- 10.0.0.1 ping statistics --- 00:18:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.928 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:11.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:11.928 00:18:11.928 --- 10.0.0.2 ping statistics --- 00:18:11.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.928 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71617 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71617 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71617 ']' 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.928 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.928 [2024-10-07 11:29:07.357777] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
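The ip(8) and iptables work traced above (nvmf/common.sh) builds the fixture every test below relies on: four veth pairs, with the nvmf_tgt_if/nvmf_tgt_if2 ends moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers enslaved to nvmf_br, 10.0.0.1 and 10.0.0.2 on the initiator side, 10.0.0.3 and 10.0.0.4 inside the namespace, ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of the first of the two data paths, using the names and addresses from the trace (the *_2 interfaces with 10.0.0.2/10.0.0.4 are set up the same way):

  # namespace plus one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addresses: initiator end stays in the root namespace, target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the peer ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator side reaching the namespaced target address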
00:18:11.928 [2024-10-07 11:29:07.357878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.187 [2024-10-07 11:29:07.504074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.187 [2024-10-07 11:29:07.621080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.187 [2024-10-07 11:29:07.621146] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.187 [2024-10-07 11:29:07.621159] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.187 [2024-10-07 11:29:07.621170] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.187 [2024-10-07 11:29:07.621179] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.187 [2024-10-07 11:29:07.621661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:13.119 true 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:13.119 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:13.684 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:13.684 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:13.684 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:13.684 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:13.684 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:13.941 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:13.941 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:13.941 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:14.511 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:14.511 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:14.511 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:14.511 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:14.511 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:14.511 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:14.781 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:14.781 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:14.781 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:15.039 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:15.039 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:15.297 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:15.297 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:15.297 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:15.554 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:15.554 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:15.812 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iDGeA6lLxk 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.EsKs7mHbZl 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iDGeA6lLxk 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.EsKs7mHbZl 00:18:16.070 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:16.328 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:16.585 [2024-10-07 11:29:12.076573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.843 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iDGeA6lLxk 00:18:16.843 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iDGeA6lLxk 00:18:16.843 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:17.101 [2024-10-07 11:29:12.380157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.101 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.358 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:17.616 [2024-10-07 11:29:12.940276] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.616 [2024-10-07 11:29:12.940559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.616 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.873 malloc0 00:18:17.873 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.132 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk 00:18:18.390 11:29:13 
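The two keys used for the rest of the run are produced by format_interchange_psk above: the raw strings 00112233445566778899aabbccddeeff and ffeeddccbbaa99887766554433221100 become NVMeTLSkey-1:01:...: interchange-format PSKs, written to mktemp files (/tmp/tmp.iDGeA6lLxk and /tmp/tmp.EsKs7mHbZl) and chmod'ed 0600. Judging from the trace and the resulting strings, the base64 payload is the configured key bytes with a 4-byte CRC32 appended, and the 01 field carries the digest argument; a rough reconstruction of that helper in the same bash-plus-inline-python style the script itself uses (an approximation for illustration, not a copy of nvmf/common.sh):

  format_interchange_psk() {
    # $1 = key string, $2 = digest value; prints NVMeTLSkey-1:<digest>:<base64(key + crc32)>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()), end="")' "$1" "$2"
  }

  key_path=$(mktemp)
  format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
  chmod 0600 "$key_path"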
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.647 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iDGeA6lLxk 00:18:30.862 Initializing NVMe Controllers 00:18:30.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.863 Initialization complete. Launching workers. 00:18:30.863 ======================================================== 00:18:30.863 Latency(us) 00:18:30.863 Device Information : IOPS MiB/s Average min max 00:18:30.863 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9527.59 37.22 6719.08 1184.97 12549.88 00:18:30.863 ======================================================== 00:18:30.863 Total : 9527.59 37.22 6719.08 1184.97 12549.88 00:18:30.863 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDGeA6lLxk 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iDGeA6lLxk 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71866 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71866 /var/tmp/bdevperf.sock 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71866 ']' 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
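Everything the spdk_nvme_perf run above needed was configured through the RPC sequence traced before it: the target is started with --wait-for-rpc so the ssl socket implementation can be selected and pinned to TLS 1.3 before framework_start_init, then the TCP transport, the subsystem, a TLS-enabled listener (-k), a malloc namespace, the keyring entry and the host1 PSK authorization are created. Condensed into the bare rpc.py calls, with paths and NQNs exactly as in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init          # the target was launched with --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0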
00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.863 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.863 [2024-10-07 11:29:24.321457] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:18:30.863 [2024-10-07 11:29:24.321579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71866 ] 00:18:30.863 [2024-10-07 11:29:24.462926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.863 [2024-10-07 11:29:24.590893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.863 [2024-10-07 11:29:24.646010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.863 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.863 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:30.863 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk 00:18:30.863 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.863 [2024-10-07 11:29:25.926636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.863 TLSTESTn1 00:18:30.863 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.863 Running I/O for 10 seconds... 
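run_bdevperf (target/tls.sh@144, traced above) drives the initiator side: it starts bdevperf with its own RPC socket, waits for the socket, loads the same PSK file into the initiator-side keyring, attaches a controller over TLS, then lets bdevperf.py run the verify workload whose progress follows. The equivalent sequence by hand, with the same binaries, socket and arguments as in the log:

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (the harness waits for /var/tmp/bdevperf.sock to appear before issuing RPCs)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests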
00:18:32.767 3968.00 IOPS, 15.50 MiB/s [2024-10-07T11:29:29.225Z] 3982.50 IOPS, 15.56 MiB/s [2024-10-07T11:29:30.159Z] 4051.33 IOPS, 15.83 MiB/s [2024-10-07T11:29:31.531Z] 4068.75 IOPS, 15.89 MiB/s [2024-10-07T11:29:32.460Z] 4068.00 IOPS, 15.89 MiB/s [2024-10-07T11:29:33.391Z] 4073.00 IOPS, 15.91 MiB/s [2024-10-07T11:29:34.322Z] 4076.43 IOPS, 15.92 MiB/s [2024-10-07T11:29:35.269Z] 4077.75 IOPS, 15.93 MiB/s [2024-10-07T11:29:36.202Z] 4089.00 IOPS, 15.97 MiB/s [2024-10-07T11:29:36.202Z] 4096.10 IOPS, 16.00 MiB/s 00:18:40.679 Latency(us) 00:18:40.679 [2024-10-07T11:29:36.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.679 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.679 Verification LBA range: start 0x0 length 0x2000 00:18:40.679 TLSTESTn1 : 10.02 4101.42 16.02 0.00 0.00 31151.37 5719.51 23473.80 00:18:40.679 [2024-10-07T11:29:36.202Z] =================================================================================================================== 00:18:40.679 [2024-10-07T11:29:36.202Z] Total : 4101.42 16.02 0.00 0.00 31151.37 5719.51 23473.80 00:18:40.679 { 00:18:40.679 "results": [ 00:18:40.679 { 00:18:40.679 "job": "TLSTESTn1", 00:18:40.679 "core_mask": "0x4", 00:18:40.679 "workload": "verify", 00:18:40.679 "status": "finished", 00:18:40.679 "verify_range": { 00:18:40.679 "start": 0, 00:18:40.679 "length": 8192 00:18:40.679 }, 00:18:40.679 "queue_depth": 128, 00:18:40.679 "io_size": 4096, 00:18:40.679 "runtime": 10.016777, 00:18:40.679 "iops": 4101.419049261055, 00:18:40.679 "mibps": 16.021168161175996, 00:18:40.679 "io_failed": 0, 00:18:40.679 "io_timeout": 0, 00:18:40.679 "avg_latency_us": 31151.368811607543, 00:18:40.679 "min_latency_us": 5719.505454545455, 00:18:40.679 "max_latency_us": 23473.803636363635 00:18:40.679 } 00:18:40.679 ], 00:18:40.679 "core_count": 1 00:18:40.679 } 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71866 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71866 ']' 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71866 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71866 00:18:40.936 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:40.936 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:40.936 killing process with pid 71866 00:18:40.936 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71866' 00:18:40.936 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71866 00:18:40.937 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.937 00:18:40.937 Latency(us) 00:18:40.937 [2024-10-07T11:29:36.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.937 [2024-10-07T11:29:36.460Z] 
=================================================================================================================== 00:18:40.937 [2024-10-07T11:29:36.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71866 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EsKs7mHbZl 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EsKs7mHbZl 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EsKs7mHbZl 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EsKs7mHbZl 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72001 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72001 /var/tmp/bdevperf.sock 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72001 ']' 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.937 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.194 [2024-10-07 11:29:36.497640] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:18:41.194 [2024-10-07 11:29:36.497744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72001 ] 00:18:41.194 [2024-10-07 11:29:36.635900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.451 [2024-10-07 11:29:36.747969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.451 [2024-10-07 11:29:36.800190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:41.451 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.451 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:41.451 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EsKs7mHbZl 00:18:41.709 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.966 [2024-10-07 11:29:37.391694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.966 [2024-10-07 11:29:37.396867] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:41.966 [2024-10-07 11:29:37.397440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe42090 (107): Transport endpoint is not connected 00:18:41.966 [2024-10-07 11:29:37.398428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe42090 (9): Bad file descriptor 00:18:41.966 [2024-10-07 11:29:37.399423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.966 [2024-10-07 11:29:37.399447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:41.966 [2024-10-07 11:29:37.399459] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:41.966 [2024-10-07 11:29:37.399470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:41.966 request: 00:18:41.966 { 00:18:41.966 "name": "TLSTEST", 00:18:41.966 "trtype": "tcp", 00:18:41.966 "traddr": "10.0.0.3", 00:18:41.966 "adrfam": "ipv4", 00:18:41.966 "trsvcid": "4420", 00:18:41.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.966 "prchk_reftag": false, 00:18:41.966 "prchk_guard": false, 00:18:41.966 "hdgst": false, 00:18:41.966 "ddgst": false, 00:18:41.966 "psk": "key0", 00:18:41.966 "allow_unrecognized_csi": false, 00:18:41.966 "method": "bdev_nvme_attach_controller", 00:18:41.966 "req_id": 1 00:18:41.966 } 00:18:41.966 Got JSON-RPC error response 00:18:41.966 response: 00:18:41.966 { 00:18:41.966 "code": -5, 00:18:41.966 "message": "Input/output error" 00:18:41.966 } 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72001 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72001 ']' 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72001 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72001 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:41.966 killing process with pid 72001 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72001' 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72001 00:18:41.966 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.966 00:18:41.966 Latency(us) 00:18:41.966 [2024-10-07T11:29:37.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.966 [2024-10-07T11:29:37.489Z] =================================================================================================================== 00:18:41.966 [2024-10-07T11:29:37.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.966 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72001 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iDGeA6lLxk 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:42.223 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iDGeA6lLxk 
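The Input/output error above is the expected outcome of target/tls.sh@147: the bdevperf side registers /tmp/tmp.EsKs7mHbZl (the second key) while the target still holds key0 from /tmp/tmp.iDGeA6lLxk for host1, so the TLS handshake fails, the target drops the connection (the errno 107 messages) and bdev_nvme_attach_controller exits non-zero, which the NOT wrapper counts as a pass. The same check reduced to a hand-run snippet, using the same RPCs as the trace with the pass/fail inversion made explicit:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.EsKs7mHbZl      # PSK the target does not expect
  if $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach succeeded with the wrong PSK" && exit 1
  fi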
00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iDGeA6lLxk 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iDGeA6lLxk 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72022 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72022 /var/tmp/bdevperf.sock 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72022 ']' 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.224 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.224 [2024-10-07 11:29:37.727305] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:18:42.224 [2024-10-07 11:29:37.727439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72022 ] 00:18:42.482 [2024-10-07 11:29:37.862813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.482 [2024-10-07 11:29:37.967402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.740 [2024-10-07 11:29:38.022024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:43.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:43.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk 00:18:43.561 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:43.819 [2024-10-07 11:29:39.241381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.819 [2024-10-07 11:29:39.246845] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:43.819 [2024-10-07 11:29:39.246891] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:43.819 [2024-10-07 11:29:39.246956] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:43.819 [2024-10-07 11:29:39.247046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b090 (107): Transport endpoint is not connected 00:18:43.819 [2024-10-07 11:29:39.248033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b090 (9): Bad file descriptor 00:18:43.819 [2024-10-07 11:29:39.249030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.819 [2024-10-07 11:29:39.249053] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:43.819 [2024-10-07 11:29:39.249064] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:43.819 [2024-10-07 11:29:39.249075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:43.819 request: 00:18:43.819 { 00:18:43.819 "name": "TLSTEST", 00:18:43.819 "trtype": "tcp", 00:18:43.819 "traddr": "10.0.0.3", 00:18:43.819 "adrfam": "ipv4", 00:18:43.819 "trsvcid": "4420", 00:18:43.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.819 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:43.819 "prchk_reftag": false, 00:18:43.819 "prchk_guard": false, 00:18:43.819 "hdgst": false, 00:18:43.819 "ddgst": false, 00:18:43.819 "psk": "key0", 00:18:43.820 "allow_unrecognized_csi": false, 00:18:43.820 "method": "bdev_nvme_attach_controller", 00:18:43.820 "req_id": 1 00:18:43.820 } 00:18:43.820 Got JSON-RPC error response 00:18:43.820 response: 00:18:43.820 { 00:18:43.820 "code": -5, 00:18:43.820 "message": "Input/output error" 00:18:43.820 } 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72022 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72022 ']' 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72022 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72022 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:43.820 killing process with pid 72022 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72022' 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72022 00:18:43.820 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.820 00:18:43.820 Latency(us) 00:18:43.820 [2024-10-07T11:29:39.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.820 [2024-10-07T11:29:39.343Z] =================================================================================================================== 00:18:43.820 [2024-10-07T11:29:39.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.820 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72022 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDGeA6lLxk 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDGeA6lLxk 
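This failure (target/tls.sh@150) is the target refusing the connection rather than the key being wrong: only host1 was added to cnode1 with --psk, so tcp.c finds no key for the PSK identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and closes the socket before the controller can initialize. For host2 to succeed, the target would need its own authorization, for example by reusing the key already in the target keyring (a hypothetical call, not part of this run, but the same RPC used for host1 earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0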
00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDGeA6lLxk 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iDGeA6lLxk 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72056 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72056 /var/tmp/bdevperf.sock 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72056 ']' 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.078 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.078 [2024-10-07 11:29:39.577558] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:18:44.078 [2024-10-07 11:29:39.577674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72056 ] 00:18:44.335 [2024-10-07 11:29:39.719496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.335 [2024-10-07 11:29:39.823500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.592 [2024-10-07 11:29:39.877563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.156 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.156 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:45.156 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk 00:18:45.413 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.670 [2024-10-07 11:29:41.117004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.670 [2024-10-07 11:29:41.127064] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.670 [2024-10-07 11:29:41.127111] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.670 [2024-10-07 11:29:41.127160] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.670 [2024-10-07 11:29:41.127650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef4090 (107): Transport endpoint is not connected 00:18:45.670 [2024-10-07 11:29:41.128642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef4090 (9): Bad file descriptor 00:18:45.670 [2024-10-07 11:29:41.129638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:45.670 [2024-10-07 11:29:41.129663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:45.670 [2024-10-07 11:29:41.129674] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:45.670 [2024-10-07 11:29:41.129686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:45.670 request: 00:18:45.670 { 00:18:45.670 "name": "TLSTEST", 00:18:45.670 "trtype": "tcp", 00:18:45.670 "traddr": "10.0.0.3", 00:18:45.670 "adrfam": "ipv4", 00:18:45.670 "trsvcid": "4420", 00:18:45.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.670 "prchk_reftag": false, 00:18:45.670 "prchk_guard": false, 00:18:45.670 "hdgst": false, 00:18:45.670 "ddgst": false, 00:18:45.670 "psk": "key0", 00:18:45.670 "allow_unrecognized_csi": false, 00:18:45.670 "method": "bdev_nvme_attach_controller", 00:18:45.670 "req_id": 1 00:18:45.670 } 00:18:45.670 Got JSON-RPC error response 00:18:45.670 response: 00:18:45.671 { 00:18:45.671 "code": -5, 00:18:45.671 "message": "Input/output error" 00:18:45.671 } 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72056 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72056 ']' 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72056 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72056 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:45.671 killing process with pid 72056 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72056' 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72056 00:18:45.671 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.671 00:18:45.671 Latency(us) 00:18:45.671 [2024-10-07T11:29:41.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.671 [2024-10-07T11:29:41.194Z] =================================================================================================================== 00:18:45.671 [2024-10-07T11:29:41.194Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.671 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72056 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.928 11:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72090 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72090 /var/tmp/bdevperf.sock 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72090 ']' 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.928 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.188 [2024-10-07 11:29:41.459673] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:18:46.189 [2024-10-07 11:29:41.459797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72090 ] 00:18:46.189 [2024-10-07 11:29:41.599020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.189 [2024-10-07 11:29:41.710340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.451 [2024-10-07 11:29:41.764952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.451 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.451 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:46.451 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:46.708 [2024-10-07 11:29:42.068216] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:46.708 [2024-10-07 11:29:42.068278] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:46.708 request: 00:18:46.708 { 00:18:46.708 "name": "key0", 00:18:46.708 "path": "", 00:18:46.708 "method": "keyring_file_add_key", 00:18:46.708 "req_id": 1 00:18:46.708 } 00:18:46.708 Got JSON-RPC error response 00:18:46.708 response: 00:18:46.708 { 00:18:46.708 "code": -1, 00:18:46.708 "message": "Operation not permitted" 00:18:46.708 } 00:18:46.708 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.966 [2024-10-07 11:29:42.388395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.966 [2024-10-07 11:29:42.388458] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:46.966 request: 00:18:46.966 { 00:18:46.966 "name": "TLSTEST", 00:18:46.966 "trtype": "tcp", 00:18:46.966 "traddr": "10.0.0.3", 00:18:46.966 "adrfam": "ipv4", 00:18:46.966 "trsvcid": "4420", 00:18:46.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.966 "prchk_reftag": false, 00:18:46.966 "prchk_guard": false, 00:18:46.966 "hdgst": false, 00:18:46.966 "ddgst": false, 00:18:46.966 "psk": "key0", 00:18:46.966 "allow_unrecognized_csi": false, 00:18:46.966 "method": "bdev_nvme_attach_controller", 00:18:46.966 "req_id": 1 00:18:46.966 } 00:18:46.966 Got JSON-RPC error response 00:18:46.966 response: 00:18:46.966 { 00:18:46.966 "code": -126, 00:18:46.966 "message": "Required key not available" 00:18:46.966 } 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72090 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72090 ']' 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72090 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.966 11:29:42 
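The final case above (target/tls.sh@156) passes an empty string as the key path: keyring_file_add_key rejects it with "Non-absolute paths are not allowed", so no key0 exists on the bdevperf side and the attach fails with "Could not load PSK: key0" (code -126, Required key not available) rather than an I/O error. In short, the file-based keyring wants an absolute path to the PSK file; an illustration against the same bdevperf RPC socket (the relative-path line is an inference from that same check, not something this run executed):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc keyring_file_add_key key0 ""                    # rejected: non-absolute path
  $rpc keyring_file_add_key key0 tmp.iDGeA6lLxk        # a relative path fails the same check
  $rpc keyring_file_add_key key0 /tmp/tmp.iDGeA6lLxk   # accepted: absolute path to the PSK file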
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72090 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:46.966 killing process with pid 72090 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72090' 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72090 00:18:46.966 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.966 00:18:46.966 Latency(us) 00:18:46.966 [2024-10-07T11:29:42.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.966 [2024-10-07T11:29:42.489Z] =================================================================================================================== 00:18:46.966 [2024-10-07T11:29:42.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.966 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72090 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71617 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71617 ']' 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71617 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71617 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:47.223 killing process with pid 71617 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71617' 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71617 00:18:47.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71617 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:18:47.480 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KMgiJFx5lT 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KMgiJFx5lT 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72127 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72127 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72127 ']' 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.737 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.737 [2024-10-07 11:29:43.085113] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:18:47.737 [2024-10-07 11:29:43.085929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.737 [2024-10-07 11:29:43.229329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.995 [2024-10-07 11:29:43.346102] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.995 [2024-10-07 11:29:43.346190] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
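The format_interchange_psk step above (target/tls.sh@160, via format_key in nvmf/common.sh) turns the raw hex string 00112233445566778899aabbccddeeff0011223344556677 and digest 2 into the printed key_long value NVMeTLSkey-1:02:...==:. A sketch of what that embedded python step appears to compute, using python3 directly; the CRC32 suffix and its byte order are assumptions based on the shape of the output, not taken from the helper's source:

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
# Append a CRC32 of the key bytes (byte order assumed little-endian), then
# base64-encode and wrap in the NVMe TLS PSK interchange framing.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02}:{}:".format(2, base64.b64encode(key + crc).decode()))
' "$key"

The resulting string is written to the mktemp file /tmp/tmp.KMgiJFx5lT and chmod 0600, which is the permission the keyring checks later in this run expect.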
00:18:47.995 [2024-10-07 11:29:43.346218] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.995 [2024-10-07 11:29:43.346226] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.995 [2024-10-07 11:29:43.346234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.995 [2024-10-07 11:29:43.346673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.995 [2024-10-07 11:29:43.405967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KMgiJFx5lT 00:18:48.926 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.199 [2024-10-07 11:29:44.454073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.199 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.456 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:49.713 [2024-10-07 11:29:45.022186] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.713 [2024-10-07 11:29:45.022510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:49.713 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.971 malloc0 00:18:49.971 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.229 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:18:50.487 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KMgiJFx5lT 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
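setup_nvmf_tgt (target/tls.sh@166, lines above) configures the target for TLS entirely over rpc.py: a TCP transport, a subsystem, a listener created with -k so the port is TLS-enabled, a malloc namespace, the PSK registered in the keyring, and the host entry tied to that key. Condensed from the calls above, with the same NQNs, address and key path this job uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.KMgiJFx5lT

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0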
00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KMgiJFx5lT 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72188 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72188 /var/tmp/bdevperf.sock 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72188 ']' 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.051 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.051 [2024-10-07 11:29:46.342001] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
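run_bdevperf (target/tls.sh@33..@42) then mirrors that on the initiator side, as the following output shows: it registers the same PSK file in bdevperf's keyring over the bdevperf RPC socket, attaches an NVMe-oF controller with --psk, and triggers the queued verify workload with perform_tests. The same sequence, condensed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests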
00:18:51.051 [2024-10-07 11:29:46.342395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72188 ] 00:18:51.051 [2024-10-07 11:29:46.476594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.309 [2024-10-07 11:29:46.588982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.309 [2024-10-07 11:29:46.646373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.874 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.874 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:51.874 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:18:52.132 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.390 [2024-10-07 11:29:47.818108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.390 TLSTESTn1 00:18:52.390 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:52.648 Running I/O for 10 seconds... 00:18:54.514 4071.00 IOPS, 15.90 MiB/s [2024-10-07T11:29:51.450Z] 4044.50 IOPS, 15.80 MiB/s [2024-10-07T11:29:52.018Z] 4025.33 IOPS, 15.72 MiB/s [2024-10-07T11:29:53.394Z] 4009.50 IOPS, 15.66 MiB/s [2024-10-07T11:29:54.396Z] 4027.00 IOPS, 15.73 MiB/s [2024-10-07T11:29:55.336Z] 4044.00 IOPS, 15.80 MiB/s [2024-10-07T11:29:56.269Z] 4051.29 IOPS, 15.83 MiB/s [2024-10-07T11:29:57.238Z] 4033.50 IOPS, 15.76 MiB/s [2024-10-07T11:29:58.172Z] 4029.00 IOPS, 15.74 MiB/s [2024-10-07T11:29:58.172Z] 4032.40 IOPS, 15.75 MiB/s 00:19:02.649 Latency(us) 00:19:02.649 [2024-10-07T11:29:58.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.649 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.649 Verification LBA range: start 0x0 length 0x2000 00:19:02.649 TLSTESTn1 : 10.02 4038.40 15.77 0.00 0.00 31637.67 6613.18 33363.78 00:19:02.649 [2024-10-07T11:29:58.172Z] =================================================================================================================== 00:19:02.649 [2024-10-07T11:29:58.172Z] Total : 4038.40 15.77 0.00 0.00 31637.67 6613.18 33363.78 00:19:02.649 { 00:19:02.649 "results": [ 00:19:02.649 { 00:19:02.649 "job": "TLSTESTn1", 00:19:02.649 "core_mask": "0x4", 00:19:02.649 "workload": "verify", 00:19:02.649 "status": "finished", 00:19:02.649 "verify_range": { 00:19:02.649 "start": 0, 00:19:02.649 "length": 8192 00:19:02.649 }, 00:19:02.649 "queue_depth": 128, 00:19:02.649 "io_size": 4096, 00:19:02.649 "runtime": 10.016353, 00:19:02.649 "iops": 4038.3960110032062, 00:19:02.649 "mibps": 15.774984417981274, 00:19:02.649 "io_failed": 0, 00:19:02.649 "io_timeout": 0, 00:19:02.649 "avg_latency_us": 31637.66638552646, 00:19:02.649 "min_latency_us": 6613.178181818182, 00:19:02.649 
"max_latency_us": 33363.781818181815 00:19:02.649 } 00:19:02.649 ], 00:19:02.649 "core_count": 1 00:19:02.649 } 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72188 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72188 ']' 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72188 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72188 00:19:02.649 killing process with pid 72188 00:19:02.649 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.649 00:19:02.649 Latency(us) 00:19:02.649 [2024-10-07T11:29:58.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.649 [2024-10-07T11:29:58.172Z] =================================================================================================================== 00:19:02.649 [2024-10-07T11:29:58.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72188' 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72188 00:19:02.649 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72188 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KMgiJFx5lT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KMgiJFx5lT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KMgiJFx5lT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KMgiJFx5lT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KMgiJFx5lT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72324 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72324 /var/tmp/bdevperf.sock 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72324 ']' 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.908 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.908 [2024-10-07 11:29:58.384189] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
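This bdevperf instance exists only to check the negative path for key file permissions: target/tls.sh@171 has just made the PSK world-readable with chmod 0666, and the next lines show keyring_file_add_key rejecting it ('Invalid permissions ... 0100666') so that the subsequent attach fails with 'Required key not available'. The keyring module appears to insist on owner-only (0600) access; a minimal reproduction under that assumption, using the same socket and key path:

chmod 0666 /tmp/tmp.KMgiJFx5lT
# Expected to fail while the file is group/other readable:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT || echo "rejected as expected"
# Restore the strict mode before the key is used again (target/tls.sh@182 does this later).
chmod 0600 /tmp/tmp.KMgiJFx5lT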
00:19:02.908 [2024-10-07 11:29:58.384632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72324 ] 00:19:03.166 [2024-10-07 11:29:58.527605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.166 [2024-10-07 11:29:58.642860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.423 [2024-10-07 11:29:58.696062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.988 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.988 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:03.988 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:04.284 [2024-10-07 11:29:59.648317] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KMgiJFx5lT': 0100666 00:19:04.284 [2024-10-07 11:29:59.648396] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:04.284 request: 00:19:04.284 { 00:19:04.284 "name": "key0", 00:19:04.284 "path": "/tmp/tmp.KMgiJFx5lT", 00:19:04.284 "method": "keyring_file_add_key", 00:19:04.284 "req_id": 1 00:19:04.284 } 00:19:04.284 Got JSON-RPC error response 00:19:04.284 response: 00:19:04.284 { 00:19:04.284 "code": -1, 00:19:04.284 "message": "Operation not permitted" 00:19:04.284 } 00:19:04.284 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.564 [2024-10-07 11:29:59.912495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.564 [2024-10-07 11:29:59.912570] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:04.564 request: 00:19:04.564 { 00:19:04.564 "name": "TLSTEST", 00:19:04.564 "trtype": "tcp", 00:19:04.564 "traddr": "10.0.0.3", 00:19:04.564 "adrfam": "ipv4", 00:19:04.564 "trsvcid": "4420", 00:19:04.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.564 "prchk_reftag": false, 00:19:04.564 "prchk_guard": false, 00:19:04.564 "hdgst": false, 00:19:04.564 "ddgst": false, 00:19:04.564 "psk": "key0", 00:19:04.564 "allow_unrecognized_csi": false, 00:19:04.564 "method": "bdev_nvme_attach_controller", 00:19:04.564 "req_id": 1 00:19:04.564 } 00:19:04.564 Got JSON-RPC error response 00:19:04.564 response: 00:19:04.564 { 00:19:04.564 "code": -126, 00:19:04.564 "message": "Required key not available" 00:19:04.564 } 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72324 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72324 ']' 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72324 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72324 00:19:04.564 killing process with pid 72324 00:19:04.564 Received shutdown signal, test time was about 10.000000 seconds 00:19:04.564 00:19:04.564 Latency(us) 00:19:04.564 [2024-10-07T11:30:00.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.564 [2024-10-07T11:30:00.087Z] =================================================================================================================== 00:19:04.564 [2024-10-07T11:30:00.087Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72324' 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72324 00:19:04.564 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72324 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72127 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72127 ']' 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72127 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72127 00:19:04.822 killing process with pid 72127 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72127' 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72127 00:19:04.822 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72127 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72363 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72363 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72363 ']' 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.080 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.080 [2024-10-07 11:30:00.537330] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:05.080 [2024-10-07 11:30:00.538638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.339 [2024-10-07 11:30:00.672556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.339 [2024-10-07 11:30:00.791254] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.339 [2024-10-07 11:30:00.791545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.339 [2024-10-07 11:30:00.791568] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.339 [2024-10-07 11:30:00.791577] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.339 [2024-10-07 11:30:00.791585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
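For the target-side variant of the same check, nvmfappstart launches a fresh nvmf_tgt (pid 72363 here) inside the nvmf_tgt_ns_spdk network namespace and waits for its default RPC socket before setup_nvmf_tgt is retried, and expected to fail, with the still world-readable key. A rough equivalent of that start-and-wait step, again with rpc_get_methods standing in for the waitforlisten helper:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# rpc.py talks to /var/tmp/spdk.sock by default; poll it until the target is up.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done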
00:19:05.339 [2024-10-07 11:30:00.792002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.339 [2024-10-07 11:30:00.847744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KMgiJFx5lT 00:19:06.273 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.530 [2024-10-07 11:30:01.934631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.531 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.788 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:07.046 [2024-10-07 11:30:02.506740] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.046 [2024-10-07 11:30:02.506987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:07.046 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.304 malloc0 00:19:07.304 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.869 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:08.126 
[2024-10-07 11:30:03.469579] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KMgiJFx5lT': 0100666 00:19:08.126 [2024-10-07 11:30:03.469637] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:08.126 request: 00:19:08.126 { 00:19:08.126 "name": "key0", 00:19:08.126 "path": "/tmp/tmp.KMgiJFx5lT", 00:19:08.126 "method": "keyring_file_add_key", 00:19:08.126 "req_id": 1 00:19:08.126 } 00:19:08.126 Got JSON-RPC error response 00:19:08.126 response: 00:19:08.126 { 00:19:08.126 "code": -1, 00:19:08.126 "message": "Operation not permitted" 00:19:08.126 } 00:19:08.126 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.383 [2024-10-07 11:30:03.773679] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:08.383 [2024-10-07 11:30:03.773757] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:08.383 request: 00:19:08.383 { 00:19:08.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.383 "host": "nqn.2016-06.io.spdk:host1", 00:19:08.383 "psk": "key0", 00:19:08.383 "method": "nvmf_subsystem_add_host", 00:19:08.383 "req_id": 1 00:19:08.383 } 00:19:08.383 Got JSON-RPC error response 00:19:08.383 response: 00:19:08.383 { 00:19:08.383 "code": -32603, 00:19:08.383 "message": "Internal error" 00:19:08.383 } 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72363 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72363 ']' 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72363 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:08.383 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72363 00:19:08.384 killing process with pid 72363 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72363' 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72363 00:19:08.384 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72363 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KMgiJFx5lT 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72432 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72432 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72432 ']' 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.642 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.642 [2024-10-07 11:30:04.144127] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:08.642 [2024-10-07 11:30:04.144236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.900 [2024-10-07 11:30:04.284827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.900 [2024-10-07 11:30:04.409721] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.900 [2024-10-07 11:30:04.409778] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.900 [2024-10-07 11:30:04.409792] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.900 [2024-10-07 11:30:04.409803] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.900 [2024-10-07 11:30:04.409812] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.900 [2024-10-07 11:30:04.410266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.158 [2024-10-07 11:30:04.466155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.725 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.725 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:09.725 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:09.725 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.725 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.983 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.983 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:19:09.983 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KMgiJFx5lT 00:19:09.983 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.983 [2024-10-07 11:30:05.493049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.240 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:10.498 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:10.756 [2024-10-07 11:30:06.069171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.756 [2024-10-07 11:30:06.069427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.756 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.014 malloc0 00:19:11.014 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:11.272 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:11.530 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72493 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72493 /var/tmp/bdevperf.sock 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72493 ']' 
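The lines that follow show bdevperf pid 72493 registering the restored 0600 key and attaching TLSTESTn1 (target/tls.sh@193 and @194), after which @198 and @199 capture the live configuration of the target and of bdevperf with save_config; the tgtconf JSON below is the first of those dumps. Condensed, that capture step looks like the sketch here, and both dumps should carry the same keyring_file_add_key entry pointing at /tmp/tmp.KMgiJFx5lT; the grep is only an illustrative way to pull that entry out:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

tgtconf=$($rpc save_config)                                  # target, default /var/tmp/spdk.sock
bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)   # bdevperf instance

echo "$tgtconf"      | grep -A 3 keyring_file_add_key
echo "$bdevperfconf" | grep -A 3 keyring_file_add_key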
00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.096 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.096 [2024-10-07 11:30:07.376198] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:12.096 [2024-10-07 11:30:07.376297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:19:12.096 [2024-10-07 11:30:07.518828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.354 [2024-10-07 11:30:07.645067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.354 [2024-10-07 11:30:07.702975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.290 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.290 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.290 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:13.290 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.549 [2024-10-07 11:30:08.981607] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.549 TLSTESTn1 00:19:13.549 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:14.115 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:14.115 "subsystems": [ 00:19:14.115 { 00:19:14.115 "subsystem": "keyring", 00:19:14.115 "config": [ 00:19:14.115 { 00:19:14.115 "method": "keyring_file_add_key", 00:19:14.115 "params": { 00:19:14.115 "name": "key0", 00:19:14.115 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:14.115 } 00:19:14.115 } 00:19:14.115 ] 00:19:14.115 }, 00:19:14.115 { 00:19:14.115 "subsystem": "iobuf", 00:19:14.115 "config": [ 00:19:14.115 { 00:19:14.115 "method": "iobuf_set_options", 00:19:14.115 "params": { 00:19:14.115 "small_pool_count": 8192, 00:19:14.115 "large_pool_count": 1024, 00:19:14.115 "small_bufsize": 8192, 00:19:14.115 "large_bufsize": 135168 00:19:14.115 } 00:19:14.115 } 00:19:14.115 ] 00:19:14.115 }, 00:19:14.115 { 00:19:14.115 "subsystem": "sock", 00:19:14.115 "config": [ 00:19:14.115 { 00:19:14.115 "method": "sock_set_default_impl", 00:19:14.115 "params": { 00:19:14.115 "impl_name": "uring" 00:19:14.115 
} 00:19:14.115 }, 00:19:14.115 { 00:19:14.115 "method": "sock_impl_set_options", 00:19:14.115 "params": { 00:19:14.115 "impl_name": "ssl", 00:19:14.115 "recv_buf_size": 4096, 00:19:14.115 "send_buf_size": 4096, 00:19:14.115 "enable_recv_pipe": true, 00:19:14.115 "enable_quickack": false, 00:19:14.115 "enable_placement_id": 0, 00:19:14.115 "enable_zerocopy_send_server": true, 00:19:14.115 "enable_zerocopy_send_client": false, 00:19:14.115 "zerocopy_threshold": 0, 00:19:14.115 "tls_version": 0, 00:19:14.115 "enable_ktls": false 00:19:14.115 } 00:19:14.115 }, 00:19:14.115 { 00:19:14.115 "method": "sock_impl_set_options", 00:19:14.115 "params": { 00:19:14.115 "impl_name": "posix", 00:19:14.115 "recv_buf_size": 2097152, 00:19:14.115 "send_buf_size": 2097152, 00:19:14.115 "enable_recv_pipe": true, 00:19:14.115 "enable_quickack": false, 00:19:14.115 "enable_placement_id": 0, 00:19:14.115 "enable_zerocopy_send_server": true, 00:19:14.115 "enable_zerocopy_send_client": false, 00:19:14.115 "zerocopy_threshold": 0, 00:19:14.115 "tls_version": 0, 00:19:14.115 "enable_ktls": false 00:19:14.115 } 00:19:14.115 }, 00:19:14.115 { 00:19:14.115 "method": "sock_impl_set_options", 00:19:14.115 "params": { 00:19:14.115 "impl_name": "uring", 00:19:14.115 "recv_buf_size": 2097152, 00:19:14.115 "send_buf_size": 2097152, 00:19:14.115 "enable_recv_pipe": true, 00:19:14.115 "enable_quickack": false, 00:19:14.115 "enable_placement_id": 0, 00:19:14.115 "enable_zerocopy_send_server": false, 00:19:14.115 "enable_zerocopy_send_client": false, 00:19:14.115 "zerocopy_threshold": 0, 00:19:14.115 "tls_version": 0, 00:19:14.115 "enable_ktls": false 00:19:14.116 } 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "vmd", 00:19:14.116 "config": [] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "accel", 00:19:14.116 "config": [ 00:19:14.116 { 00:19:14.116 "method": "accel_set_options", 00:19:14.116 "params": { 00:19:14.116 "small_cache_size": 128, 00:19:14.116 "large_cache_size": 16, 00:19:14.116 "task_count": 2048, 00:19:14.116 "sequence_count": 2048, 00:19:14.116 "buf_count": 2048 00:19:14.116 } 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "bdev", 00:19:14.116 "config": [ 00:19:14.116 { 00:19:14.116 "method": "bdev_set_options", 00:19:14.116 "params": { 00:19:14.116 "bdev_io_pool_size": 65535, 00:19:14.116 "bdev_io_cache_size": 256, 00:19:14.116 "bdev_auto_examine": true, 00:19:14.116 "iobuf_small_cache_size": 128, 00:19:14.116 "iobuf_large_cache_size": 16 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_raid_set_options", 00:19:14.116 "params": { 00:19:14.116 "process_window_size_kb": 1024, 00:19:14.116 "process_max_bandwidth_mb_sec": 0 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_iscsi_set_options", 00:19:14.116 "params": { 00:19:14.116 "timeout_sec": 30 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_nvme_set_options", 00:19:14.116 "params": { 00:19:14.116 "action_on_timeout": "none", 00:19:14.116 "timeout_us": 0, 00:19:14.116 "timeout_admin_us": 0, 00:19:14.116 "keep_alive_timeout_ms": 10000, 00:19:14.116 "arbitration_burst": 0, 00:19:14.116 "low_priority_weight": 0, 00:19:14.116 "medium_priority_weight": 0, 00:19:14.116 "high_priority_weight": 0, 00:19:14.116 "nvme_adminq_poll_period_us": 10000, 00:19:14.116 "nvme_ioq_poll_period_us": 0, 00:19:14.116 "io_queue_requests": 0, 00:19:14.116 "delay_cmd_submit": true, 00:19:14.116 "transport_retry_count": 4, 
00:19:14.116 "bdev_retry_count": 3, 00:19:14.116 "transport_ack_timeout": 0, 00:19:14.116 "ctrlr_loss_timeout_sec": 0, 00:19:14.116 "reconnect_delay_sec": 0, 00:19:14.116 "fast_io_fail_timeout_sec": 0, 00:19:14.116 "disable_auto_failback": false, 00:19:14.116 "generate_uuids": false, 00:19:14.116 "transport_tos": 0, 00:19:14.116 "nvme_error_stat": false, 00:19:14.116 "rdma_srq_size": 0, 00:19:14.116 "io_path_stat": false, 00:19:14.116 "allow_accel_sequence": false, 00:19:14.116 "rdma_max_cq_size": 0, 00:19:14.116 "rdma_cm_event_timeout_ms": 0, 00:19:14.116 "dhchap_digests": [ 00:19:14.116 "sha256", 00:19:14.116 "sha384", 00:19:14.116 "sha512" 00:19:14.116 ], 00:19:14.116 "dhchap_dhgroups": [ 00:19:14.116 "null", 00:19:14.116 "ffdhe2048", 00:19:14.116 "ffdhe3072", 00:19:14.116 "ffdhe4096", 00:19:14.116 "ffdhe6144", 00:19:14.116 "ffdhe8192" 00:19:14.116 ] 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_nvme_set_hotplug", 00:19:14.116 "params": { 00:19:14.116 "period_us": 100000, 00:19:14.116 "enable": false 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_malloc_create", 00:19:14.116 "params": { 00:19:14.116 "name": "malloc0", 00:19:14.116 "num_blocks": 8192, 00:19:14.116 "block_size": 4096, 00:19:14.116 "physical_block_size": 4096, 00:19:14.116 "uuid": "c34b86e8-96f0-40c9-a8c9-31c3cf3cdd62", 00:19:14.116 "optimal_io_boundary": 0, 00:19:14.116 "md_size": 0, 00:19:14.116 "dif_type": 0, 00:19:14.116 "dif_is_head_of_md": false, 00:19:14.116 "dif_pi_format": 0 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "bdev_wait_for_examine" 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "nbd", 00:19:14.116 "config": [] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "scheduler", 00:19:14.116 "config": [ 00:19:14.116 { 00:19:14.116 "method": "framework_set_scheduler", 00:19:14.116 "params": { 00:19:14.116 "name": "static" 00:19:14.116 } 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "subsystem": "nvmf", 00:19:14.116 "config": [ 00:19:14.116 { 00:19:14.116 "method": "nvmf_set_config", 00:19:14.116 "params": { 00:19:14.116 "discovery_filter": "match_any", 00:19:14.116 "admin_cmd_passthru": { 00:19:14.116 "identify_ctrlr": false 00:19:14.116 }, 00:19:14.116 "dhchap_digests": [ 00:19:14.116 "sha256", 00:19:14.116 "sha384", 00:19:14.116 "sha512" 00:19:14.116 ], 00:19:14.116 "dhchap_dhgroups": [ 00:19:14.116 "null", 00:19:14.116 "ffdhe2048", 00:19:14.116 "ffdhe3072", 00:19:14.116 "ffdhe4096", 00:19:14.116 "ffdhe6144", 00:19:14.116 "ffdhe8192" 00:19:14.116 ] 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_set_max_subsystems", 00:19:14.116 "params": { 00:19:14.116 "max_subsystems": 1024 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_set_crdt", 00:19:14.116 "params": { 00:19:14.116 "crdt1": 0, 00:19:14.116 "crdt2": 0, 00:19:14.116 "crdt3": 0 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_create_transport", 00:19:14.116 "params": { 00:19:14.116 "trtype": "TCP", 00:19:14.116 "max_queue_depth": 128, 00:19:14.116 "max_io_qpairs_per_ctrlr": 127, 00:19:14.116 "in_capsule_data_size": 4096, 00:19:14.116 "max_io_size": 131072, 00:19:14.116 "io_unit_size": 131072, 00:19:14.116 "max_aq_depth": 128, 00:19:14.116 "num_shared_buffers": 511, 00:19:14.116 "buf_cache_size": 4294967295, 00:19:14.116 "dif_insert_or_strip": false, 00:19:14.116 "zcopy": false, 00:19:14.116 "c2h_success": false, 00:19:14.116 
"sock_priority": 0, 00:19:14.116 "abort_timeout_sec": 1, 00:19:14.116 "ack_timeout": 0, 00:19:14.116 "data_wr_pool_size": 0 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_create_subsystem", 00:19:14.116 "params": { 00:19:14.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.116 "allow_any_host": false, 00:19:14.116 "serial_number": "SPDK00000000000001", 00:19:14.116 "model_number": "SPDK bdev Controller", 00:19:14.116 "max_namespaces": 10, 00:19:14.116 "min_cntlid": 1, 00:19:14.116 "max_cntlid": 65519, 00:19:14.116 "ana_reporting": false 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_subsystem_add_host", 00:19:14.116 "params": { 00:19:14.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.116 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.116 "psk": "key0" 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_subsystem_add_ns", 00:19:14.116 "params": { 00:19:14.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.116 "namespace": { 00:19:14.116 "nsid": 1, 00:19:14.116 "bdev_name": "malloc0", 00:19:14.116 "nguid": "C34B86E896F040C9A8C931C3CF3CDD62", 00:19:14.116 "uuid": "c34b86e8-96f0-40c9-a8c9-31c3cf3cdd62", 00:19:14.116 "no_auto_visible": false 00:19:14.116 } 00:19:14.116 } 00:19:14.116 }, 00:19:14.116 { 00:19:14.116 "method": "nvmf_subsystem_add_listener", 00:19:14.116 "params": { 00:19:14.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.116 "listen_address": { 00:19:14.116 "trtype": "TCP", 00:19:14.116 "adrfam": "IPv4", 00:19:14.116 "traddr": "10.0.0.3", 00:19:14.116 "trsvcid": "4420" 00:19:14.116 }, 00:19:14.116 "secure_channel": true 00:19:14.116 } 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 } 00:19:14.116 ] 00:19:14.116 }' 00:19:14.116 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:14.375 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:14.375 "subsystems": [ 00:19:14.375 { 00:19:14.375 "subsystem": "keyring", 00:19:14.375 "config": [ 00:19:14.375 { 00:19:14.375 "method": "keyring_file_add_key", 00:19:14.375 "params": { 00:19:14.375 "name": "key0", 00:19:14.375 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:14.375 } 00:19:14.375 } 00:19:14.375 ] 00:19:14.375 }, 00:19:14.375 { 00:19:14.375 "subsystem": "iobuf", 00:19:14.375 "config": [ 00:19:14.375 { 00:19:14.375 "method": "iobuf_set_options", 00:19:14.375 "params": { 00:19:14.375 "small_pool_count": 8192, 00:19:14.375 "large_pool_count": 1024, 00:19:14.375 "small_bufsize": 8192, 00:19:14.375 "large_bufsize": 135168 00:19:14.375 } 00:19:14.375 } 00:19:14.375 ] 00:19:14.375 }, 00:19:14.375 { 00:19:14.375 "subsystem": "sock", 00:19:14.375 "config": [ 00:19:14.375 { 00:19:14.375 "method": "sock_set_default_impl", 00:19:14.375 "params": { 00:19:14.375 "impl_name": "uring" 00:19:14.375 } 00:19:14.375 }, 00:19:14.375 { 00:19:14.375 "method": "sock_impl_set_options", 00:19:14.375 "params": { 00:19:14.375 "impl_name": "ssl", 00:19:14.375 "recv_buf_size": 4096, 00:19:14.375 "send_buf_size": 4096, 00:19:14.375 "enable_recv_pipe": true, 00:19:14.375 "enable_quickack": false, 00:19:14.375 "enable_placement_id": 0, 00:19:14.375 "enable_zerocopy_send_server": true, 00:19:14.375 "enable_zerocopy_send_client": false, 00:19:14.375 "zerocopy_threshold": 0, 00:19:14.375 "tls_version": 0, 00:19:14.375 "enable_ktls": false 00:19:14.375 } 00:19:14.375 }, 00:19:14.375 { 00:19:14.375 "method": "sock_impl_set_options", 00:19:14.375 "params": { 
00:19:14.375 "impl_name": "posix", 00:19:14.375 "recv_buf_size": 2097152, 00:19:14.375 "send_buf_size": 2097152, 00:19:14.375 "enable_recv_pipe": true, 00:19:14.375 "enable_quickack": false, 00:19:14.375 "enable_placement_id": 0, 00:19:14.375 "enable_zerocopy_send_server": true, 00:19:14.375 "enable_zerocopy_send_client": false, 00:19:14.375 "zerocopy_threshold": 0, 00:19:14.375 "tls_version": 0, 00:19:14.375 "enable_ktls": false 00:19:14.375 } 00:19:14.375 }, 00:19:14.375 { 00:19:14.375 "method": "sock_impl_set_options", 00:19:14.375 "params": { 00:19:14.375 "impl_name": "uring", 00:19:14.375 "recv_buf_size": 2097152, 00:19:14.375 "send_buf_size": 2097152, 00:19:14.375 "enable_recv_pipe": true, 00:19:14.376 "enable_quickack": false, 00:19:14.376 "enable_placement_id": 0, 00:19:14.376 "enable_zerocopy_send_server": false, 00:19:14.376 "enable_zerocopy_send_client": false, 00:19:14.376 "zerocopy_threshold": 0, 00:19:14.376 "tls_version": 0, 00:19:14.376 "enable_ktls": false 00:19:14.376 } 00:19:14.376 } 00:19:14.376 ] 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "subsystem": "vmd", 00:19:14.376 "config": [] 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "subsystem": "accel", 00:19:14.376 "config": [ 00:19:14.376 { 00:19:14.376 "method": "accel_set_options", 00:19:14.376 "params": { 00:19:14.376 "small_cache_size": 128, 00:19:14.376 "large_cache_size": 16, 00:19:14.376 "task_count": 2048, 00:19:14.376 "sequence_count": 2048, 00:19:14.376 "buf_count": 2048 00:19:14.376 } 00:19:14.376 } 00:19:14.376 ] 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "subsystem": "bdev", 00:19:14.376 "config": [ 00:19:14.376 { 00:19:14.376 "method": "bdev_set_options", 00:19:14.376 "params": { 00:19:14.376 "bdev_io_pool_size": 65535, 00:19:14.376 "bdev_io_cache_size": 256, 00:19:14.376 "bdev_auto_examine": true, 00:19:14.376 "iobuf_small_cache_size": 128, 00:19:14.376 "iobuf_large_cache_size": 16 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_raid_set_options", 00:19:14.376 "params": { 00:19:14.376 "process_window_size_kb": 1024, 00:19:14.376 "process_max_bandwidth_mb_sec": 0 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_iscsi_set_options", 00:19:14.376 "params": { 00:19:14.376 "timeout_sec": 30 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_nvme_set_options", 00:19:14.376 "params": { 00:19:14.376 "action_on_timeout": "none", 00:19:14.376 "timeout_us": 0, 00:19:14.376 "timeout_admin_us": 0, 00:19:14.376 "keep_alive_timeout_ms": 10000, 00:19:14.376 "arbitration_burst": 0, 00:19:14.376 "low_priority_weight": 0, 00:19:14.376 "medium_priority_weight": 0, 00:19:14.376 "high_priority_weight": 0, 00:19:14.376 "nvme_adminq_poll_period_us": 10000, 00:19:14.376 "nvme_ioq_poll_period_us": 0, 00:19:14.376 "io_queue_requests": 512, 00:19:14.376 "delay_cmd_submit": true, 00:19:14.376 "transport_retry_count": 4, 00:19:14.376 "bdev_retry_count": 3, 00:19:14.376 "transport_ack_timeout": 0, 00:19:14.376 "ctrlr_loss_timeout_sec": 0, 00:19:14.376 "reconnect_delay_sec": 0, 00:19:14.376 "fast_io_fail_timeout_sec": 0, 00:19:14.376 "disable_auto_failback": false, 00:19:14.376 "generate_uuids": false, 00:19:14.376 "transport_tos": 0, 00:19:14.376 "nvme_error_stat": false, 00:19:14.376 "rdma_srq_size": 0, 00:19:14.376 "io_path_stat": false, 00:19:14.376 "allow_accel_sequence": false, 00:19:14.376 "rdma_max_cq_size": 0, 00:19:14.376 "rdma_cm_event_timeout_ms": 0, 00:19:14.376 "dhchap_digests": [ 00:19:14.376 "sha256", 00:19:14.376 "sha384", 00:19:14.376 "sha512" 
00:19:14.376 ], 00:19:14.376 "dhchap_dhgroups": [ 00:19:14.376 "null", 00:19:14.376 "ffdhe2048", 00:19:14.376 "ffdhe3072", 00:19:14.376 "ffdhe4096", 00:19:14.376 "ffdhe6144", 00:19:14.376 "ffdhe8192" 00:19:14.376 ] 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_nvme_attach_controller", 00:19:14.376 "params": { 00:19:14.376 "name": "TLSTEST", 00:19:14.376 "trtype": "TCP", 00:19:14.376 "adrfam": "IPv4", 00:19:14.376 "traddr": "10.0.0.3", 00:19:14.376 "trsvcid": "4420", 00:19:14.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.376 "prchk_reftag": false, 00:19:14.376 "prchk_guard": false, 00:19:14.376 "ctrlr_loss_timeout_sec": 0, 00:19:14.376 "reconnect_delay_sec": 0, 00:19:14.376 "fast_io_fail_timeout_sec": 0, 00:19:14.376 "psk": "key0", 00:19:14.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.376 "hdgst": false, 00:19:14.376 "ddgst": false, 00:19:14.376 "multipath": "multipath" 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_nvme_set_hotplug", 00:19:14.376 "params": { 00:19:14.376 "period_us": 100000, 00:19:14.376 "enable": false 00:19:14.376 } 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "method": "bdev_wait_for_examine" 00:19:14.376 } 00:19:14.376 ] 00:19:14.376 }, 00:19:14.376 { 00:19:14.376 "subsystem": "nbd", 00:19:14.376 "config": [] 00:19:14.376 } 00:19:14.376 ] 00:19:14.376 }' 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72493 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72493 ']' 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72493 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.376 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72493 00:19:14.634 killing process with pid 72493 00:19:14.634 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.634 00:19:14.634 Latency(us) 00:19:14.634 [2024-10-07T11:30:10.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.634 [2024-10-07T11:30:10.157Z] =================================================================================================================== 00:19:14.634 [2024-10-07T11:30:10.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.634 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.634 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.634 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72493' 00:19:14.634 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72493 00:19:14.634 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72493 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72432 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72432 ']' 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72432 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72432 00:19:14.892 killing process with pid 72432 00:19:14.892 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:14.892 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:14.892 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72432' 00:19:14.892 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72432 00:19:14.892 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72432 00:19:15.151 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:15.151 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:15.151 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.151 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.151 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:15.151 "subsystems": [ 00:19:15.151 { 00:19:15.151 "subsystem": "keyring", 00:19:15.151 "config": [ 00:19:15.151 { 00:19:15.151 "method": "keyring_file_add_key", 00:19:15.151 "params": { 00:19:15.151 "name": "key0", 00:19:15.151 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:15.151 } 00:19:15.151 } 00:19:15.151 ] 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "subsystem": "iobuf", 00:19:15.151 "config": [ 00:19:15.151 { 00:19:15.151 "method": "iobuf_set_options", 00:19:15.151 "params": { 00:19:15.151 "small_pool_count": 8192, 00:19:15.151 "large_pool_count": 1024, 00:19:15.151 "small_bufsize": 8192, 00:19:15.151 "large_bufsize": 135168 00:19:15.151 } 00:19:15.151 } 00:19:15.151 ] 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "subsystem": "sock", 00:19:15.151 "config": [ 00:19:15.151 { 00:19:15.151 "method": "sock_set_default_impl", 00:19:15.151 "params": { 00:19:15.151 "impl_name": "uring" 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "sock_impl_set_options", 00:19:15.151 "params": { 00:19:15.151 "impl_name": "ssl", 00:19:15.151 "recv_buf_size": 4096, 00:19:15.151 "send_buf_size": 4096, 00:19:15.151 "enable_recv_pipe": true, 00:19:15.151 "enable_quickack": false, 00:19:15.151 "enable_placement_id": 0, 00:19:15.151 "enable_zerocopy_send_server": true, 00:19:15.151 "enable_zerocopy_send_client": false, 00:19:15.151 "zerocopy_threshold": 0, 00:19:15.151 "tls_version": 0, 00:19:15.151 "enable_ktls": false 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "sock_impl_set_options", 00:19:15.151 "params": { 00:19:15.151 "impl_name": "posix", 00:19:15.151 "recv_buf_size": 2097152, 00:19:15.151 "send_buf_size": 2097152, 00:19:15.151 "enable_recv_pipe": true, 00:19:15.151 "enable_quickack": false, 00:19:15.151 "enable_placement_id": 0, 00:19:15.151 "enable_zerocopy_send_server": true, 00:19:15.151 "enable_zerocopy_send_client": false, 00:19:15.151 "zerocopy_threshold": 0, 00:19:15.151 "tls_version": 0, 00:19:15.151 "enable_ktls": false 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "sock_impl_set_options", 00:19:15.151 
"params": { 00:19:15.151 "impl_name": "uring", 00:19:15.151 "recv_buf_size": 2097152, 00:19:15.151 "send_buf_size": 2097152, 00:19:15.151 "enable_recv_pipe": true, 00:19:15.151 "enable_quickack": false, 00:19:15.151 "enable_placement_id": 0, 00:19:15.151 "enable_zerocopy_send_server": false, 00:19:15.151 "enable_zerocopy_send_client": false, 00:19:15.151 "zerocopy_threshold": 0, 00:19:15.151 "tls_version": 0, 00:19:15.151 "enable_ktls": false 00:19:15.151 } 00:19:15.151 } 00:19:15.151 ] 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "subsystem": "vmd", 00:19:15.151 "config": [] 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "subsystem": "accel", 00:19:15.151 "config": [ 00:19:15.151 { 00:19:15.151 "method": "accel_set_options", 00:19:15.151 "params": { 00:19:15.151 "small_cache_size": 128, 00:19:15.151 "large_cache_size": 16, 00:19:15.151 "task_count": 2048, 00:19:15.151 "sequence_count": 2048, 00:19:15.151 "buf_count": 2048 00:19:15.151 } 00:19:15.151 } 00:19:15.151 ] 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "subsystem": "bdev", 00:19:15.151 "config": [ 00:19:15.151 { 00:19:15.151 "method": "bdev_set_options", 00:19:15.151 "params": { 00:19:15.151 "bdev_io_pool_size": 65535, 00:19:15.151 "bdev_io_cache_size": 256, 00:19:15.151 "bdev_auto_examine": true, 00:19:15.151 "iobuf_small_cache_size": 128, 00:19:15.151 "iobuf_large_cache_size": 16 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "bdev_raid_set_options", 00:19:15.151 "params": { 00:19:15.151 "process_window_size_kb": 1024, 00:19:15.151 "process_max_bandwidth_mb_sec": 0 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "bdev_iscsi_set_options", 00:19:15.151 "params": { 00:19:15.151 "timeout_sec": 30 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "bdev_nvme_set_options", 00:19:15.151 "params": { 00:19:15.151 "action_on_timeout": "none", 00:19:15.151 "timeout_us": 0, 00:19:15.151 "timeout_admin_us": 0, 00:19:15.151 "keep_alive_timeout_ms": 10000, 00:19:15.151 "arbitration_burst": 0, 00:19:15.151 "low_priority_weight": 0, 00:19:15.151 "medium_priority_weight": 0, 00:19:15.151 "high_priority_weight": 0, 00:19:15.151 "nvme_adminq_poll_period_us": 10000, 00:19:15.151 "nvme_ioq_poll_period_us": 0, 00:19:15.151 "io_queue_requests": 0, 00:19:15.151 "delay_cmd_submit": true, 00:19:15.151 "transport_retry_count": 4, 00:19:15.151 "bdev_retry_count": 3, 00:19:15.151 "transport_ack_timeout": 0, 00:19:15.151 "ctrlr_loss_timeout_sec": 0, 00:19:15.151 "reconnect_delay_sec": 0, 00:19:15.151 "fast_io_fail_timeout_sec": 0, 00:19:15.151 "disable_auto_failback": false, 00:19:15.151 "generate_uuids": false, 00:19:15.151 "transport_tos": 0, 00:19:15.151 "nvme_error_stat": false, 00:19:15.151 "rdma_srq_size": 0, 00:19:15.151 "io_path_stat": false, 00:19:15.151 "allow_accel_sequence": false, 00:19:15.151 "rdma_max_cq_size": 0, 00:19:15.151 "rdma_cm_event_timeout_ms": 0, 00:19:15.151 "dhchap_digests": [ 00:19:15.151 "sha256", 00:19:15.151 "sha384", 00:19:15.151 "sha512" 00:19:15.151 ], 00:19:15.151 "dhchap_dhgroups": [ 00:19:15.151 "null", 00:19:15.151 "ffdhe2048", 00:19:15.151 "ffdhe3072", 00:19:15.151 "ffdhe4096", 00:19:15.151 "ffdhe6144", 00:19:15.151 "ffdhe8192" 00:19:15.151 ] 00:19:15.151 } 00:19:15.151 }, 00:19:15.151 { 00:19:15.151 "method": "bdev_nvme_set_hotplug", 00:19:15.152 "params": { 00:19:15.152 "period_us": 100000, 00:19:15.152 "enable": false 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "bdev_malloc_create", 00:19:15.152 "params": { 00:19:15.152 "name": 
"malloc0", 00:19:15.152 "num_blocks": 8192, 00:19:15.152 "block_size": 4096, 00:19:15.152 "physical_block_size": 4096, 00:19:15.152 "uuid": "c34b86e8-96f0-40c9-a8c9-31c3cf3cdd62", 00:19:15.152 "optimal_io_boundary": 0, 00:19:15.152 "md_size": 0, 00:19:15.152 "dif_type": 0, 00:19:15.152 "dif_is_head_of_md": false, 00:19:15.152 "dif_pi_format": 0 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "bdev_wait_for_examine" 00:19:15.152 } 00:19:15.152 ] 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "subsystem": "nbd", 00:19:15.152 "config": [] 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "subsystem": "scheduler", 00:19:15.152 "config": [ 00:19:15.152 { 00:19:15.152 "method": "framework_set_scheduler", 00:19:15.152 "params": { 00:19:15.152 "name": "static" 00:19:15.152 } 00:19:15.152 } 00:19:15.152 ] 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "subsystem": "nvmf", 00:19:15.152 "config": [ 00:19:15.152 { 00:19:15.152 "method": "nvmf_set_config", 00:19:15.152 "params": { 00:19:15.152 "discovery_filter": "match_any", 00:19:15.152 "admin_cmd_passthru": { 00:19:15.152 "identify_ctrlr": false 00:19:15.152 }, 00:19:15.152 "dhchap_digests": [ 00:19:15.152 "sha256", 00:19:15.152 "sha384", 00:19:15.152 "sha512" 00:19:15.152 ], 00:19:15.152 "dhchap_dhgroups": [ 00:19:15.152 "null", 00:19:15.152 "ffdhe2048", 00:19:15.152 "ffdhe3072", 00:19:15.152 "ffdhe4096", 00:19:15.152 "ffdhe6144", 00:19:15.152 "ffdhe8192" 00:19:15.152 ] 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_set_max_subsystems", 00:19:15.152 "params": { 00:19:15.152 "max_subsystems": 1024 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_set_crdt", 00:19:15.152 "params": { 00:19:15.152 "crdt1": 0, 00:19:15.152 "crdt2": 0, 00:19:15.152 "crdt3": 0 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_create_transport", 00:19:15.152 "params": { 00:19:15.152 "trtype": "TCP", 00:19:15.152 "max_queue_depth": 128, 00:19:15.152 "max_io_qpairs_per_ctrlr": 127, 00:19:15.152 "in_capsule_data_size": 4096, 00:19:15.152 "max_io_size": 131072, 00:19:15.152 "io_unit_size": 131072, 00:19:15.152 "max_aq_depth": 128, 00:19:15.152 "num_shared_buffers": 511, 00:19:15.152 "buf_cache_size": 4294967295, 00:19:15.152 "dif_insert_or_strip": false, 00:19:15.152 "zcopy": false, 00:19:15.152 "c2h_success": false, 00:19:15.152 "sock_priority": 0, 00:19:15.152 "abort_timeout_sec": 1, 00:19:15.152 "ack_timeout": 0, 00:19:15.152 "data_wr_pool_size": 0 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_create_subsystem", 00:19:15.152 "params": { 00:19:15.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.152 "allow_any_host": false, 00:19:15.152 "serial_number": "SPDK00000000000001", 00:19:15.152 "model_number": "SPDK bdev Controller", 00:19:15.152 "max_namespaces": 10, 00:19:15.152 "min_cntlid": 1, 00:19:15.152 "max_cntlid": 65519, 00:19:15.152 "ana_reporting": false 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_subsystem_add_host", 00:19:15.152 "params": { 00:19:15.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.152 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.152 "psk": "key0" 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_subsystem_add_ns", 00:19:15.152 "params": { 00:19:15.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.152 "namespace": { 00:19:15.152 "nsid": 1, 00:19:15.152 "bdev_name": "malloc0", 00:19:15.152 "nguid": "C34B86E896F040C9A8C931C3CF3CDD62", 00:19:15.152 "uuid": 
"c34b86e8-96f0-40c9-a8c9-31c3cf3cdd62", 00:19:15.152 "no_auto_visible": false 00:19:15.152 } 00:19:15.152 } 00:19:15.152 }, 00:19:15.152 { 00:19:15.152 "method": "nvmf_subsystem_add_listener", 00:19:15.152 "params": { 00:19:15.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.152 "listen_address": { 00:19:15.152 "trtype": "TCP", 00:19:15.152 "adrfam": "IPv4", 00:19:15.152 "traddr": "10.0.0.3", 00:19:15.152 "trsvcid": "4420" 00:19:15.152 }, 00:19:15.152 "secure_channel": true 00:19:15.152 } 00:19:15.152 } 00:19:15.152 ] 00:19:15.152 } 00:19:15.152 ] 00:19:15.152 }' 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72548 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72548 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72548 ']' 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.152 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.152 [2024-10-07 11:30:10.484433] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:15.152 [2024-10-07 11:30:10.484575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.152 [2024-10-07 11:30:10.625780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.410 [2024-10-07 11:30:10.738883] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.410 [2024-10-07 11:30:10.738947] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.410 [2024-10-07 11:30:10.738959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.410 [2024-10-07 11:30:10.738967] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.410 [2024-10-07 11:30:10.738974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.410 [2024-10-07 11:30:10.739455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.410 [2024-10-07 11:30:10.905760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:15.668 [2024-10-07 11:30:10.983924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.668 [2024-10-07 11:30:11.022779] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.668 [2024-10-07 11:30:11.023000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:16.234 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.234 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.234 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72580 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72580 /var/tmp/bdevperf.sock 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72580 ']' 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
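The bdevperf_pid recorded just above belongs to the initiator-side bdevperf whose command line is traced at tls.sh@206 below: it starts idle (-z) with its own JSON configuration on /dev/fd/63 and exposes an RPC socket at /var/tmp/bdevperf.sock, and no I/O runs until that socket is driven explicitly. A condensed sketch with the flags as printed in the trace; feeding the echoed JSON through a process substitution is an assumption about how the /dev/fd/63 argument is produced:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperf_config") &
    bdevperf_pid=$!
    # once /var/tmp/bdevperf.sock answers, the run is kicked off with:
    #   /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests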
00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:16.235 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:16.235 "subsystems": [ 00:19:16.235 { 00:19:16.235 "subsystem": "keyring", 00:19:16.235 "config": [ 00:19:16.235 { 00:19:16.235 "method": "keyring_file_add_key", 00:19:16.235 "params": { 00:19:16.235 "name": "key0", 00:19:16.235 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:16.235 } 00:19:16.235 } 00:19:16.235 ] 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "subsystem": "iobuf", 00:19:16.235 "config": [ 00:19:16.235 { 00:19:16.235 "method": "iobuf_set_options", 00:19:16.235 "params": { 00:19:16.235 "small_pool_count": 8192, 00:19:16.235 "large_pool_count": 1024, 00:19:16.235 "small_bufsize": 8192, 00:19:16.235 "large_bufsize": 135168 00:19:16.235 } 00:19:16.235 } 00:19:16.235 ] 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "subsystem": "sock", 00:19:16.235 "config": [ 00:19:16.235 { 00:19:16.235 "method": "sock_set_default_impl", 00:19:16.235 "params": { 00:19:16.235 "impl_name": "uring" 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "sock_impl_set_options", 00:19:16.235 "params": { 00:19:16.235 "impl_name": "ssl", 00:19:16.235 "recv_buf_size": 4096, 00:19:16.235 "send_buf_size": 4096, 00:19:16.235 "enable_recv_pipe": true, 00:19:16.235 "enable_quickack": false, 00:19:16.235 "enable_placement_id": 0, 00:19:16.235 "enable_zerocopy_send_server": true, 00:19:16.235 "enable_zerocopy_send_client": false, 00:19:16.235 "zerocopy_threshold": 0, 00:19:16.235 "tls_version": 0, 00:19:16.235 "enable_ktls": false 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "sock_impl_set_options", 00:19:16.235 "params": { 00:19:16.235 "impl_name": "posix", 00:19:16.235 "recv_buf_size": 2097152, 00:19:16.235 "send_buf_size": 2097152, 00:19:16.235 "enable_recv_pipe": true, 00:19:16.235 "enable_quickack": false, 00:19:16.235 "enable_placement_id": 0, 00:19:16.235 "enable_zerocopy_send_server": true, 00:19:16.235 "enable_zerocopy_send_client": false, 00:19:16.235 "zerocopy_threshold": 0, 00:19:16.235 "tls_version": 0, 00:19:16.235 "enable_ktls": false 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "sock_impl_set_options", 00:19:16.235 "params": { 00:19:16.235 "impl_name": "uring", 00:19:16.235 "recv_buf_size": 2097152, 00:19:16.235 "send_buf_size": 2097152, 00:19:16.235 "enable_recv_pipe": true, 00:19:16.235 "enable_quickack": false, 00:19:16.235 "enable_placement_id": 0, 00:19:16.235 "enable_zerocopy_send_server": false, 00:19:16.235 "enable_zerocopy_send_client": false, 00:19:16.235 "zerocopy_threshold": 0, 00:19:16.235 "tls_version": 0, 00:19:16.235 "enable_ktls": false 00:19:16.235 } 00:19:16.235 } 00:19:16.235 ] 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "subsystem": "vmd", 00:19:16.235 "config": [] 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "subsystem": "accel", 00:19:16.235 "config": [ 00:19:16.235 { 00:19:16.235 "method": "accel_set_options", 00:19:16.235 "params": { 00:19:16.235 "small_cache_size": 128, 00:19:16.235 "large_cache_size": 16, 00:19:16.235 "task_count": 2048, 00:19:16.235 "sequence_count": 2048, 00:19:16.235 "buf_count": 2048 
00:19:16.235 } 00:19:16.235 } 00:19:16.235 ] 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "subsystem": "bdev", 00:19:16.235 "config": [ 00:19:16.235 { 00:19:16.235 "method": "bdev_set_options", 00:19:16.235 "params": { 00:19:16.235 "bdev_io_pool_size": 65535, 00:19:16.235 "bdev_io_cache_size": 256, 00:19:16.235 "bdev_auto_examine": true, 00:19:16.235 "iobuf_small_cache_size": 128, 00:19:16.235 "iobuf_large_cache_size": 16 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_raid_set_options", 00:19:16.235 "params": { 00:19:16.235 "process_window_size_kb": 1024, 00:19:16.235 "process_max_bandwidth_mb_sec": 0 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_iscsi_set_options", 00:19:16.235 "params": { 00:19:16.235 "timeout_sec": 30 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_nvme_set_options", 00:19:16.235 "params": { 00:19:16.235 "action_on_timeout": "none", 00:19:16.235 "timeout_us": 0, 00:19:16.235 "timeout_admin_us": 0, 00:19:16.235 "keep_alive_timeout_ms": 10000, 00:19:16.235 "arbitration_burst": 0, 00:19:16.235 "low_priority_weight": 0, 00:19:16.235 "medium_priority_weight": 0, 00:19:16.235 "high_priority_weight": 0, 00:19:16.235 "nvme_adminq_poll_period_us": 10000, 00:19:16.235 "nvme_ioq_poll_period_us": 0, 00:19:16.235 "io_queue_requests": 512, 00:19:16.235 "delay_cmd_submit": true, 00:19:16.235 "transport_retry_count": 4, 00:19:16.235 "bdev_retry_count": 3, 00:19:16.235 "transport_ack_timeout": 0, 00:19:16.235 "ctrlr_loss_timeout_sec": 0, 00:19:16.235 "reconnect_delay_sec": 0, 00:19:16.235 "fast_io_fail_timeout_sec": 0, 00:19:16.235 "disable_auto_failback": false, 00:19:16.235 "generate_uuids": false, 00:19:16.235 "transport_tos": 0, 00:19:16.235 "nvme_error_stat": false, 00:19:16.235 "rdma_srq_size": 0, 00:19:16.235 "io_path_stat": false, 00:19:16.235 "allow_accel_sequence": false, 00:19:16.235 "rdma_max_cq_size": 0, 00:19:16.235 "rdma_cm_event_timeout_ms": 0, 00:19:16.235 "dhchap_digests": [ 00:19:16.235 "sha256", 00:19:16.235 "sha384", 00:19:16.235 "sha512" 00:19:16.235 ], 00:19:16.235 "dhchap_dhgroups": [ 00:19:16.235 "null", 00:19:16.235 "ffdhe2048", 00:19:16.235 "ffdhe3072", 00:19:16.235 "ffdhe4096", 00:19:16.235 "ffdhe6144", 00:19:16.235 "ffdhe8192" 00:19:16.235 ] 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_nvme_attach_controller", 00:19:16.235 "params": { 00:19:16.235 "name": "TLSTEST", 00:19:16.235 "trtype": "TCP", 00:19:16.235 "adrfam": "IPv4", 00:19:16.235 "traddr": "10.0.0.3", 00:19:16.235 "trsvcid": "4420", 00:19:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.235 "prchk_reftag": false, 00:19:16.235 "prchk_guard": false, 00:19:16.235 "ctrlr_loss_timeout_sec": 0, 00:19:16.235 "reconnect_delay_sec": 0, 00:19:16.235 "fast_io_fail_timeout_sec": 0, 00:19:16.235 "psk": "key0", 00:19:16.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.235 "hdgst": false, 00:19:16.235 "ddgst": false, 00:19:16.235 "multipath": "multipath" 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_nvme_set_hotplug", 00:19:16.235 "params": { 00:19:16.235 "period_us": 100000, 00:19:16.235 "enable": false 00:19:16.235 } 00:19:16.235 }, 00:19:16.235 { 00:19:16.235 "method": "bdev_wait_for_examine" 00:19:16.235 } 00:19:16.235 ] 00:19:16.236 }, 00:19:16.236 { 00:19:16.236 "subsystem": "nbd", 00:19:16.236 "config": [] 00:19:16.236 } 00:19:16.236 ] 00:19:16.236 }' 00:19:16.236 [2024-10-07 11:30:11.555529] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 
initialization... 00:19:16.236 [2024-10-07 11:30:11.555830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72580 ] 00:19:16.236 [2024-10-07 11:30:11.697123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.493 [2024-10-07 11:30:11.827792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.493 [2024-10-07 11:30:11.969084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.751 [2024-10-07 11:30:12.022105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.318 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.318 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:17.318 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:17.318 Running I/O for 10 seconds... 00:19:19.671 3921.00 IOPS, 15.32 MiB/s [2024-10-07T11:30:16.173Z] 3960.50 IOPS, 15.47 MiB/s [2024-10-07T11:30:17.107Z] 3993.67 IOPS, 15.60 MiB/s [2024-10-07T11:30:18.042Z] 4002.00 IOPS, 15.63 MiB/s [2024-10-07T11:30:18.975Z] 4013.60 IOPS, 15.68 MiB/s [2024-10-07T11:30:19.909Z] 4015.33 IOPS, 15.68 MiB/s [2024-10-07T11:30:20.844Z] 4018.86 IOPS, 15.70 MiB/s [2024-10-07T11:30:22.219Z] 4017.25 IOPS, 15.69 MiB/s [2024-10-07T11:30:22.785Z] 4023.33 IOPS, 15.72 MiB/s [2024-10-07T11:30:23.067Z] 4029.60 IOPS, 15.74 MiB/s 00:19:27.544 Latency(us) 00:19:27.544 [2024-10-07T11:30:23.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.544 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.544 Verification LBA range: start 0x0 length 0x2000 00:19:27.544 TLSTESTn1 : 10.02 4035.15 15.76 0.00 0.00 31660.01 6583.39 27286.81 00:19:27.544 [2024-10-07T11:30:23.067Z] =================================================================================================================== 00:19:27.544 [2024-10-07T11:30:23.067Z] Total : 4035.15 15.76 0.00 0.00 31660.01 6583.39 27286.81 00:19:27.544 { 00:19:27.544 "results": [ 00:19:27.544 { 00:19:27.544 "job": "TLSTESTn1", 00:19:27.544 "core_mask": "0x4", 00:19:27.544 "workload": "verify", 00:19:27.544 "status": "finished", 00:19:27.544 "verify_range": { 00:19:27.544 "start": 0, 00:19:27.544 "length": 8192 00:19:27.544 }, 00:19:27.544 "queue_depth": 128, 00:19:27.544 "io_size": 4096, 00:19:27.544 "runtime": 10.017465, 00:19:27.544 "iops": 4035.152605973667, 00:19:27.544 "mibps": 15.762314867084637, 00:19:27.544 "io_failed": 0, 00:19:27.544 "io_timeout": 0, 00:19:27.544 "avg_latency_us": 31660.008979448638, 00:19:27.544 "min_latency_us": 6583.389090909091, 00:19:27.544 "max_latency_us": 27286.807272727274 00:19:27.544 } 00:19:27.544 ], 00:19:27.544 "core_count": 1 00:19:27.544 } 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72580 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72580 ']' 00:19:27.544 11:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72580 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72580 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.544 killing process with pid 72580 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72580' 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72580 00:19:27.544 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.544 00:19:27.544 Latency(us) 00:19:27.544 [2024-10-07T11:30:23.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.544 [2024-10-07T11:30:23.067Z] =================================================================================================================== 00:19:27.544 [2024-10-07T11:30:23.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.544 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72580 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72548 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72548 ']' 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72548 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72548 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.803 killing process with pid 72548 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72548' 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72548 00:19:27.803 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72548 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72720 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:28.061 
11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72720 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72720 ']' 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.061 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 [2024-10-07 11:30:23.431413] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:28.061 [2024-10-07 11:30:23.431528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.061 [2024-10-07 11:30:23.568026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.319 [2024-10-07 11:30:23.712890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.320 [2024-10-07 11:30:23.712961] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.320 [2024-10-07 11:30:23.712976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.320 [2024-10-07 11:30:23.712987] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.320 [2024-10-07 11:30:23.712997] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.320 [2024-10-07 11:30:23.713469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.320 [2024-10-07 11:30:23.771190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KMgiJFx5lT 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KMgiJFx5lT 00:19:29.255 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.513 [2024-10-07 11:30:24.915020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.513 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.771 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:30.337 [2024-10-07 11:30:25.575148] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.337 [2024-10-07 11:30:25.575430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.337 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.595 malloc0 00:19:30.595 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.854 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:31.112 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72787 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72787 /var/tmp/bdevperf.sock 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72787 ']' 00:19:31.373 
11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.373 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.632 [2024-10-07 11:30:26.946152] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:31.632 [2024-10-07 11:30:26.946311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72787 ] 00:19:31.632 [2024-10-07 11:30:27.109511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.891 [2024-10-07 11:30:27.234715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.891 [2024-10-07 11:30:27.293958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.458 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.458 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.458 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:33.027 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:33.027 [2024-10-07 11:30:28.506178] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.287 nvme0n1 00:19:33.287 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:33.287 Running I/O for 1 seconds... 
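Stripped of the xtrace noise, the sequence exercised in this pass is: the target (pid 72720) is configured over its default RPC socket with a TCP transport, a malloc0-backed namespace under nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener (-k) on 10.0.0.3:4420, and a host entry gated on the key0 PSK; the bdevperf initiator (pid 72787) then loads the same key file and attaches with --psk before perform_tests drives the 1 second verify run. A condensed sketch, with every command and argument as it appears in the trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # target side, default RPC socket /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side, bdevperf's RPC socket
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests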
00:19:34.482 3975.00 IOPS, 15.53 MiB/s 00:19:34.482 Latency(us) 00:19:34.482 [2024-10-07T11:30:30.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.482 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:34.482 Verification LBA range: start 0x0 length 0x2000 00:19:34.482 nvme0n1 : 1.02 4039.50 15.78 0.00 0.00 31415.44 6732.33 23473.80 00:19:34.482 [2024-10-07T11:30:30.005Z] =================================================================================================================== 00:19:34.482 [2024-10-07T11:30:30.005Z] Total : 4039.50 15.78 0.00 0.00 31415.44 6732.33 23473.80 00:19:34.482 { 00:19:34.482 "results": [ 00:19:34.482 { 00:19:34.482 "job": "nvme0n1", 00:19:34.482 "core_mask": "0x2", 00:19:34.482 "workload": "verify", 00:19:34.482 "status": "finished", 00:19:34.482 "verify_range": { 00:19:34.482 "start": 0, 00:19:34.482 "length": 8192 00:19:34.482 }, 00:19:34.482 "queue_depth": 128, 00:19:34.482 "io_size": 4096, 00:19:34.482 "runtime": 1.015721, 00:19:34.482 "iops": 4039.495097571085, 00:19:34.482 "mibps": 15.77927772488705, 00:19:34.482 "io_failed": 0, 00:19:34.482 "io_timeout": 0, 00:19:34.482 "avg_latency_us": 31415.442408880423, 00:19:34.482 "min_latency_us": 6732.334545454545, 00:19:34.482 "max_latency_us": 23473.803636363635 00:19:34.482 } 00:19:34.482 ], 00:19:34.482 "core_count": 1 00:19:34.482 } 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72787 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72787 ']' 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72787 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72787 00:19:34.482 killing process with pid 72787 00:19:34.482 Received shutdown signal, test time was about 1.000000 seconds 00:19:34.482 00:19:34.482 Latency(us) 00:19:34.482 [2024-10-07T11:30:30.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.482 [2024-10-07T11:30:30.005Z] =================================================================================================================== 00:19:34.482 [2024-10-07T11:30:30.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72787' 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72787 00:19:34.482 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72787 00:19:34.740 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72720 00:19:34.740 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72720 ']' 00:19:34.740 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72720 00:19:34.741 11:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72720 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72720' 00:19:34.741 killing process with pid 72720 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72720 00:19:34.741 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72720 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72838 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72838 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72838 ']' 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.999 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.000 [2024-10-07 11:30:30.387972] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:35.000 [2024-10-07 11:30:30.388070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.000 [2024-10-07 11:30:30.520603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.258 [2024-10-07 11:30:30.638496] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.258 [2024-10-07 11:30:30.638566] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
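The killprocess runs traced above (for the bdevperf pids 72580 and 72787 and the target pids 72548 and 72720) all expand to the same autotest_common.sh helper. A rough reconstruction of the flow the trace implies; the individual checks are exactly the ones shown at @950 through @974, while stitching them together as a function with early returns is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @950
        kill -0 "$pid" || return 1                          # @954: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then                     # @955: the branch taken in this run
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # @956: reactor_0 / reactor_1 / reactor_2 here
            [ "$process_name" = sudo ] && return 1          # @960: refuse to signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"                # @968
        kill "$pid"                                         # @969
        wait "$pid"                                         # @974: reap it and propagate its exit status
    }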
00:19:35.258 [2024-10-07 11:30:30.638579] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.258 [2024-10-07 11:30:30.638588] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.258 [2024-10-07 11:30:30.638595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.258 [2024-10-07 11:30:30.639011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.258 [2024-10-07 11:30:30.692687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.195 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.195 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.195 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 [2024-10-07 11:30:31.465734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.196 malloc0 00:19:36.196 [2024-10-07 11:30:31.504562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.196 [2024-10-07 11:30:31.504816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72870 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72870 /var/tmp/bdevperf.sock 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72870 ']' 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.196 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 [2024-10-07 11:30:31.584911] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:36.196 [2024-10-07 11:30:31.585011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:19:36.196 [2024-10-07 11:30:31.716912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.455 [2024-10-07 11:30:31.835055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.455 [2024-10-07 11:30:31.889203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:37.390 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.390 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.390 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KMgiJFx5lT 00:19:37.648 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:37.905 [2024-10-07 11:30:33.254642] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.905 nvme0n1 00:19:37.905 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.164 Running I/O for 1 seconds... 
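Every bdevperf pass in this log uses a 4096-byte verify workload at queue depth 128, so the MiB/s column in the result tables (including the one that follows) is simply the IOPS figure scaled by the I/O size: MiB/s = IOPS * 4096 / 1048576. A quick check against the 10 second run reported earlier:

    awk 'BEGIN { printf "%.2f MiB/s\n", 4035.15 * 4096 / 1048576 }'   # prints 15.76, matching that run's reported throughput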
00:19:39.097 3804.00 IOPS, 14.86 MiB/s 00:19:39.097 Latency(us) 00:19:39.097 [2024-10-07T11:30:34.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.097 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:39.097 Verification LBA range: start 0x0 length 0x2000 00:19:39.097 nvme0n1 : 1.02 3856.17 15.06 0.00 0.00 32829.83 7089.80 35270.28 00:19:39.097 [2024-10-07T11:30:34.620Z] =================================================================================================================== 00:19:39.097 [2024-10-07T11:30:34.620Z] Total : 3856.17 15.06 0.00 0.00 32829.83 7089.80 35270.28 00:19:39.097 { 00:19:39.097 "results": [ 00:19:39.097 { 00:19:39.097 "job": "nvme0n1", 00:19:39.097 "core_mask": "0x2", 00:19:39.097 "workload": "verify", 00:19:39.097 "status": "finished", 00:19:39.097 "verify_range": { 00:19:39.097 "start": 0, 00:19:39.097 "length": 8192 00:19:39.097 }, 00:19:39.097 "queue_depth": 128, 00:19:39.097 "io_size": 4096, 00:19:39.097 "runtime": 1.019664, 00:19:39.097 "iops": 3856.172229283372, 00:19:39.097 "mibps": 15.063172770638172, 00:19:39.097 "io_failed": 0, 00:19:39.097 "io_timeout": 0, 00:19:39.097 "avg_latency_us": 32829.832061407564, 00:19:39.097 "min_latency_us": 7089.8036363636365, 00:19:39.097 "max_latency_us": 35270.28363636364 00:19:39.097 } 00:19:39.097 ], 00:19:39.097 "core_count": 1 00:19:39.097 } 00:19:39.098 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:39.098 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.098 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.356 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:39.356 "subsystems": [ 00:19:39.356 { 00:19:39.356 "subsystem": "keyring", 00:19:39.356 "config": [ 00:19:39.356 { 00:19:39.356 "method": "keyring_file_add_key", 00:19:39.356 "params": { 00:19:39.356 "name": "key0", 00:19:39.356 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:39.356 } 00:19:39.356 } 00:19:39.356 ] 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "subsystem": "iobuf", 00:19:39.356 "config": [ 00:19:39.356 { 00:19:39.356 "method": "iobuf_set_options", 00:19:39.356 "params": { 00:19:39.356 "small_pool_count": 8192, 00:19:39.356 "large_pool_count": 1024, 00:19:39.356 "small_bufsize": 8192, 00:19:39.356 "large_bufsize": 135168 00:19:39.356 } 00:19:39.356 } 00:19:39.356 ] 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "subsystem": "sock", 00:19:39.356 "config": [ 00:19:39.356 { 00:19:39.356 "method": "sock_set_default_impl", 00:19:39.356 "params": { 00:19:39.356 "impl_name": "uring" 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "sock_impl_set_options", 00:19:39.356 "params": { 00:19:39.356 "impl_name": "ssl", 00:19:39.356 "recv_buf_size": 4096, 00:19:39.356 "send_buf_size": 4096, 00:19:39.356 "enable_recv_pipe": true, 00:19:39.356 "enable_quickack": false, 00:19:39.356 "enable_placement_id": 0, 00:19:39.356 "enable_zerocopy_send_server": true, 00:19:39.356 "enable_zerocopy_send_client": false, 00:19:39.356 "zerocopy_threshold": 0, 00:19:39.356 "tls_version": 0, 00:19:39.356 "enable_ktls": false 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "sock_impl_set_options", 00:19:39.356 "params": { 00:19:39.356 "impl_name": "posix", 00:19:39.356 "recv_buf_size": 
2097152, 00:19:39.356 "send_buf_size": 2097152, 00:19:39.356 "enable_recv_pipe": true, 00:19:39.356 "enable_quickack": false, 00:19:39.356 "enable_placement_id": 0, 00:19:39.356 "enable_zerocopy_send_server": true, 00:19:39.356 "enable_zerocopy_send_client": false, 00:19:39.356 "zerocopy_threshold": 0, 00:19:39.356 "tls_version": 0, 00:19:39.356 "enable_ktls": false 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "sock_impl_set_options", 00:19:39.356 "params": { 00:19:39.356 "impl_name": "uring", 00:19:39.356 "recv_buf_size": 2097152, 00:19:39.356 "send_buf_size": 2097152, 00:19:39.356 "enable_recv_pipe": true, 00:19:39.356 "enable_quickack": false, 00:19:39.356 "enable_placement_id": 0, 00:19:39.356 "enable_zerocopy_send_server": false, 00:19:39.356 "enable_zerocopy_send_client": false, 00:19:39.356 "zerocopy_threshold": 0, 00:19:39.356 "tls_version": 0, 00:19:39.356 "enable_ktls": false 00:19:39.356 } 00:19:39.356 } 00:19:39.356 ] 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "subsystem": "vmd", 00:19:39.356 "config": [] 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "subsystem": "accel", 00:19:39.356 "config": [ 00:19:39.356 { 00:19:39.356 "method": "accel_set_options", 00:19:39.356 "params": { 00:19:39.356 "small_cache_size": 128, 00:19:39.356 "large_cache_size": 16, 00:19:39.356 "task_count": 2048, 00:19:39.356 "sequence_count": 2048, 00:19:39.356 "buf_count": 2048 00:19:39.356 } 00:19:39.356 } 00:19:39.356 ] 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "subsystem": "bdev", 00:19:39.356 "config": [ 00:19:39.356 { 00:19:39.356 "method": "bdev_set_options", 00:19:39.356 "params": { 00:19:39.356 "bdev_io_pool_size": 65535, 00:19:39.356 "bdev_io_cache_size": 256, 00:19:39.356 "bdev_auto_examine": true, 00:19:39.356 "iobuf_small_cache_size": 128, 00:19:39.356 "iobuf_large_cache_size": 16 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "bdev_raid_set_options", 00:19:39.356 "params": { 00:19:39.356 "process_window_size_kb": 1024, 00:19:39.356 "process_max_bandwidth_mb_sec": 0 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "bdev_iscsi_set_options", 00:19:39.356 "params": { 00:19:39.356 "timeout_sec": 30 00:19:39.356 } 00:19:39.356 }, 00:19:39.356 { 00:19:39.356 "method": "bdev_nvme_set_options", 00:19:39.356 "params": { 00:19:39.356 "action_on_timeout": "none", 00:19:39.356 "timeout_us": 0, 00:19:39.356 "timeout_admin_us": 0, 00:19:39.356 "keep_alive_timeout_ms": 10000, 00:19:39.356 "arbitration_burst": 0, 00:19:39.356 "low_priority_weight": 0, 00:19:39.356 "medium_priority_weight": 0, 00:19:39.356 "high_priority_weight": 0, 00:19:39.356 "nvme_adminq_poll_period_us": 10000, 00:19:39.356 "nvme_ioq_poll_period_us": 0, 00:19:39.357 "io_queue_requests": 0, 00:19:39.357 "delay_cmd_submit": true, 00:19:39.357 "transport_retry_count": 4, 00:19:39.357 "bdev_retry_count": 3, 00:19:39.357 "transport_ack_timeout": 0, 00:19:39.357 "ctrlr_loss_timeout_sec": 0, 00:19:39.357 "reconnect_delay_sec": 0, 00:19:39.357 "fast_io_fail_timeout_sec": 0, 00:19:39.357 "disable_auto_failback": false, 00:19:39.357 "generate_uuids": false, 00:19:39.357 "transport_tos": 0, 00:19:39.357 "nvme_error_stat": false, 00:19:39.357 "rdma_srq_size": 0, 00:19:39.357 "io_path_stat": false, 00:19:39.357 "allow_accel_sequence": false, 00:19:39.357 "rdma_max_cq_size": 0, 00:19:39.357 "rdma_cm_event_timeout_ms": 0, 00:19:39.357 "dhchap_digests": [ 00:19:39.357 "sha256", 00:19:39.357 "sha384", 00:19:39.357 "sha512" 00:19:39.357 ], 00:19:39.357 "dhchap_dhgroups": [ 00:19:39.357 
"null", 00:19:39.357 "ffdhe2048", 00:19:39.357 "ffdhe3072", 00:19:39.357 "ffdhe4096", 00:19:39.357 "ffdhe6144", 00:19:39.357 "ffdhe8192" 00:19:39.357 ] 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "bdev_nvme_set_hotplug", 00:19:39.357 "params": { 00:19:39.357 "period_us": 100000, 00:19:39.357 "enable": false 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "bdev_malloc_create", 00:19:39.357 "params": { 00:19:39.357 "name": "malloc0", 00:19:39.357 "num_blocks": 8192, 00:19:39.357 "block_size": 4096, 00:19:39.357 "physical_block_size": 4096, 00:19:39.357 "uuid": "57ce558e-5a59-4394-b72f-58f06d9438b2", 00:19:39.357 "optimal_io_boundary": 0, 00:19:39.357 "md_size": 0, 00:19:39.357 "dif_type": 0, 00:19:39.357 "dif_is_head_of_md": false, 00:19:39.357 "dif_pi_format": 0 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "bdev_wait_for_examine" 00:19:39.357 } 00:19:39.357 ] 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "subsystem": "nbd", 00:19:39.357 "config": [] 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "subsystem": "scheduler", 00:19:39.357 "config": [ 00:19:39.357 { 00:19:39.357 "method": "framework_set_scheduler", 00:19:39.357 "params": { 00:19:39.357 "name": "static" 00:19:39.357 } 00:19:39.357 } 00:19:39.357 ] 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "subsystem": "nvmf", 00:19:39.357 "config": [ 00:19:39.357 { 00:19:39.357 "method": "nvmf_set_config", 00:19:39.357 "params": { 00:19:39.357 "discovery_filter": "match_any", 00:19:39.357 "admin_cmd_passthru": { 00:19:39.357 "identify_ctrlr": false 00:19:39.357 }, 00:19:39.357 "dhchap_digests": [ 00:19:39.357 "sha256", 00:19:39.357 "sha384", 00:19:39.357 "sha512" 00:19:39.357 ], 00:19:39.357 "dhchap_dhgroups": [ 00:19:39.357 "null", 00:19:39.357 "ffdhe2048", 00:19:39.357 "ffdhe3072", 00:19:39.357 "ffdhe4096", 00:19:39.357 "ffdhe6144", 00:19:39.357 "ffdhe8192" 00:19:39.357 ] 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_set_max_subsystems", 00:19:39.357 "params": { 00:19:39.357 "max_subsystems": 1024 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_set_crdt", 00:19:39.357 "params": { 00:19:39.357 "crdt1": 0, 00:19:39.357 "crdt2": 0, 00:19:39.357 "crdt3": 0 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_create_transport", 00:19:39.357 "params": { 00:19:39.357 "trtype": "TCP", 00:19:39.357 "max_queue_depth": 128, 00:19:39.357 "max_io_qpairs_per_ctrlr": 127, 00:19:39.357 "in_capsule_data_size": 4096, 00:19:39.357 "max_io_size": 131072, 00:19:39.357 "io_unit_size": 131072, 00:19:39.357 "max_aq_depth": 128, 00:19:39.357 "num_shared_buffers": 511, 00:19:39.357 "buf_cache_size": 4294967295, 00:19:39.357 "dif_insert_or_strip": false, 00:19:39.357 "zcopy": false, 00:19:39.357 "c2h_success": false, 00:19:39.357 "sock_priority": 0, 00:19:39.357 "abort_timeout_sec": 1, 00:19:39.357 "ack_timeout": 0, 00:19:39.357 "data_wr_pool_size": 0 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_create_subsystem", 00:19:39.357 "params": { 00:19:39.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.357 "allow_any_host": false, 00:19:39.357 "serial_number": "00000000000000000000", 00:19:39.357 "model_number": "SPDK bdev Controller", 00:19:39.357 "max_namespaces": 32, 00:19:39.357 "min_cntlid": 1, 00:19:39.357 "max_cntlid": 65519, 00:19:39.357 "ana_reporting": false 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_subsystem_add_host", 00:19:39.357 "params": { 00:19:39.357 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:39.357 "host": "nqn.2016-06.io.spdk:host1", 00:19:39.357 "psk": "key0" 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_subsystem_add_ns", 00:19:39.357 "params": { 00:19:39.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.357 "namespace": { 00:19:39.357 "nsid": 1, 00:19:39.357 "bdev_name": "malloc0", 00:19:39.357 "nguid": "57CE558E5A594394B72F58F06D9438B2", 00:19:39.357 "uuid": "57ce558e-5a59-4394-b72f-58f06d9438b2", 00:19:39.357 "no_auto_visible": false 00:19:39.357 } 00:19:39.357 } 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "method": "nvmf_subsystem_add_listener", 00:19:39.357 "params": { 00:19:39.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.357 "listen_address": { 00:19:39.357 "trtype": "TCP", 00:19:39.357 "adrfam": "IPv4", 00:19:39.357 "traddr": "10.0.0.3", 00:19:39.357 "trsvcid": "4420" 00:19:39.357 }, 00:19:39.357 "secure_channel": false, 00:19:39.357 "sock_impl": "ssl" 00:19:39.357 } 00:19:39.357 } 00:19:39.357 ] 00:19:39.357 } 00:19:39.357 ] 00:19:39.357 }' 00:19:39.357 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:39.646 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:39.646 "subsystems": [ 00:19:39.646 { 00:19:39.646 "subsystem": "keyring", 00:19:39.646 "config": [ 00:19:39.646 { 00:19:39.646 "method": "keyring_file_add_key", 00:19:39.646 "params": { 00:19:39.646 "name": "key0", 00:19:39.646 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:39.646 } 00:19:39.646 } 00:19:39.646 ] 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "subsystem": "iobuf", 00:19:39.646 "config": [ 00:19:39.646 { 00:19:39.646 "method": "iobuf_set_options", 00:19:39.646 "params": { 00:19:39.646 "small_pool_count": 8192, 00:19:39.646 "large_pool_count": 1024, 00:19:39.646 "small_bufsize": 8192, 00:19:39.646 "large_bufsize": 135168 00:19:39.646 } 00:19:39.646 } 00:19:39.646 ] 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "subsystem": "sock", 00:19:39.646 "config": [ 00:19:39.646 { 00:19:39.646 "method": "sock_set_default_impl", 00:19:39.646 "params": { 00:19:39.646 "impl_name": "uring" 00:19:39.646 } 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "method": "sock_impl_set_options", 00:19:39.646 "params": { 00:19:39.646 "impl_name": "ssl", 00:19:39.646 "recv_buf_size": 4096, 00:19:39.646 "send_buf_size": 4096, 00:19:39.646 "enable_recv_pipe": true, 00:19:39.646 "enable_quickack": false, 00:19:39.646 "enable_placement_id": 0, 00:19:39.646 "enable_zerocopy_send_server": true, 00:19:39.646 "enable_zerocopy_send_client": false, 00:19:39.646 "zerocopy_threshold": 0, 00:19:39.646 "tls_version": 0, 00:19:39.646 "enable_ktls": false 00:19:39.646 } 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "method": "sock_impl_set_options", 00:19:39.646 "params": { 00:19:39.646 "impl_name": "posix", 00:19:39.646 "recv_buf_size": 2097152, 00:19:39.646 "send_buf_size": 2097152, 00:19:39.646 "enable_recv_pipe": true, 00:19:39.646 "enable_quickack": false, 00:19:39.646 "enable_placement_id": 0, 00:19:39.646 "enable_zerocopy_send_server": true, 00:19:39.646 "enable_zerocopy_send_client": false, 00:19:39.646 "zerocopy_threshold": 0, 00:19:39.646 "tls_version": 0, 00:19:39.646 "enable_ktls": false 00:19:39.646 } 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "method": "sock_impl_set_options", 00:19:39.646 "params": { 00:19:39.646 "impl_name": "uring", 00:19:39.646 "recv_buf_size": 2097152, 00:19:39.646 "send_buf_size": 2097152, 00:19:39.646 
"enable_recv_pipe": true, 00:19:39.646 "enable_quickack": false, 00:19:39.646 "enable_placement_id": 0, 00:19:39.646 "enable_zerocopy_send_server": false, 00:19:39.646 "enable_zerocopy_send_client": false, 00:19:39.646 "zerocopy_threshold": 0, 00:19:39.646 "tls_version": 0, 00:19:39.646 "enable_ktls": false 00:19:39.646 } 00:19:39.646 } 00:19:39.646 ] 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "subsystem": "vmd", 00:19:39.646 "config": [] 00:19:39.646 }, 00:19:39.646 { 00:19:39.646 "subsystem": "accel", 00:19:39.646 "config": [ 00:19:39.646 { 00:19:39.646 "method": "accel_set_options", 00:19:39.646 "params": { 00:19:39.646 "small_cache_size": 128, 00:19:39.646 "large_cache_size": 16, 00:19:39.646 "task_count": 2048, 00:19:39.646 "sequence_count": 2048, 00:19:39.646 "buf_count": 2048 00:19:39.646 } 00:19:39.646 } 00:19:39.646 ] 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "subsystem": "bdev", 00:19:39.647 "config": [ 00:19:39.647 { 00:19:39.647 "method": "bdev_set_options", 00:19:39.647 "params": { 00:19:39.647 "bdev_io_pool_size": 65535, 00:19:39.647 "bdev_io_cache_size": 256, 00:19:39.647 "bdev_auto_examine": true, 00:19:39.647 "iobuf_small_cache_size": 128, 00:19:39.647 "iobuf_large_cache_size": 16 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_raid_set_options", 00:19:39.647 "params": { 00:19:39.647 "process_window_size_kb": 1024, 00:19:39.647 "process_max_bandwidth_mb_sec": 0 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_iscsi_set_options", 00:19:39.647 "params": { 00:19:39.647 "timeout_sec": 30 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_nvme_set_options", 00:19:39.647 "params": { 00:19:39.647 "action_on_timeout": "none", 00:19:39.647 "timeout_us": 0, 00:19:39.647 "timeout_admin_us": 0, 00:19:39.647 "keep_alive_timeout_ms": 10000, 00:19:39.647 "arbitration_burst": 0, 00:19:39.647 "low_priority_weight": 0, 00:19:39.647 "medium_priority_weight": 0, 00:19:39.647 "high_priority_weight": 0, 00:19:39.647 "nvme_adminq_poll_period_us": 10000, 00:19:39.647 "nvme_ioq_poll_period_us": 0, 00:19:39.647 "io_queue_requests": 512, 00:19:39.647 "delay_cmd_submit": true, 00:19:39.647 "transport_retry_count": 4, 00:19:39.647 "bdev_retry_count": 3, 00:19:39.647 "transport_ack_timeout": 0, 00:19:39.647 "ctrlr_loss_timeout_sec": 0, 00:19:39.647 "reconnect_delay_sec": 0, 00:19:39.647 "fast_io_fail_timeout_sec": 0, 00:19:39.647 "disable_auto_failback": false, 00:19:39.647 "generate_uuids": false, 00:19:39.647 "transport_tos": 0, 00:19:39.647 "nvme_error_stat": false, 00:19:39.647 "rdma_srq_size": 0, 00:19:39.647 "io_path_stat": false, 00:19:39.647 "allow_accel_sequence": false, 00:19:39.647 "rdma_max_cq_size": 0, 00:19:39.647 "rdma_cm_event_timeout_ms": 0, 00:19:39.647 "dhchap_digests": [ 00:19:39.647 "sha256", 00:19:39.647 "sha384", 00:19:39.647 "sha512" 00:19:39.647 ], 00:19:39.647 "dhchap_dhgroups": [ 00:19:39.647 "null", 00:19:39.647 "ffdhe2048", 00:19:39.647 "ffdhe3072", 00:19:39.647 "ffdhe4096", 00:19:39.647 "ffdhe6144", 00:19:39.647 "ffdhe8192" 00:19:39.647 ] 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_nvme_attach_controller", 00:19:39.647 "params": { 00:19:39.647 "name": "nvme0", 00:19:39.647 "trtype": "TCP", 00:19:39.647 "adrfam": "IPv4", 00:19:39.647 "traddr": "10.0.0.3", 00:19:39.647 "trsvcid": "4420", 00:19:39.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.647 "prchk_reftag": false, 00:19:39.647 "prchk_guard": false, 00:19:39.647 "ctrlr_loss_timeout_sec": 0, 00:19:39.647 
"reconnect_delay_sec": 0, 00:19:39.647 "fast_io_fail_timeout_sec": 0, 00:19:39.647 "psk": "key0", 00:19:39.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.647 "hdgst": false, 00:19:39.647 "ddgst": false, 00:19:39.647 "multipath": "multipath" 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_nvme_set_hotplug", 00:19:39.647 "params": { 00:19:39.647 "period_us": 100000, 00:19:39.647 "enable": false 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_enable_histogram", 00:19:39.647 "params": { 00:19:39.647 "name": "nvme0n1", 00:19:39.647 "enable": true 00:19:39.647 } 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "method": "bdev_wait_for_examine" 00:19:39.647 } 00:19:39.647 ] 00:19:39.647 }, 00:19:39.647 { 00:19:39.647 "subsystem": "nbd", 00:19:39.647 "config": [] 00:19:39.647 } 00:19:39.647 ] 00:19:39.647 }' 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72870 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72870 ']' 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72870 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72870 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:39.647 killing process with pid 72870 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72870' 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72870 00:19:39.647 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.647 00:19:39.647 Latency(us) 00:19:39.647 [2024-10-07T11:30:35.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.647 [2024-10-07T11:30:35.170Z] =================================================================================================================== 00:19:39.647 [2024-10-07T11:30:35.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.647 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72870 00:19:39.906 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72838 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72838 ']' 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72838 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72838 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:19:39.907 killing process with pid 72838 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72838' 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72838 00:19:39.907 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72838 00:19:40.166 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:40.166 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:40.166 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.166 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:40.166 "subsystems": [ 00:19:40.166 { 00:19:40.166 "subsystem": "keyring", 00:19:40.166 "config": [ 00:19:40.166 { 00:19:40.166 "method": "keyring_file_add_key", 00:19:40.166 "params": { 00:19:40.166 "name": "key0", 00:19:40.166 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:40.166 } 00:19:40.166 } 00:19:40.166 ] 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "subsystem": "iobuf", 00:19:40.166 "config": [ 00:19:40.166 { 00:19:40.166 "method": "iobuf_set_options", 00:19:40.166 "params": { 00:19:40.166 "small_pool_count": 8192, 00:19:40.166 "large_pool_count": 1024, 00:19:40.166 "small_bufsize": 8192, 00:19:40.166 "large_bufsize": 135168 00:19:40.166 } 00:19:40.166 } 00:19:40.166 ] 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "subsystem": "sock", 00:19:40.166 "config": [ 00:19:40.166 { 00:19:40.166 "method": "sock_set_default_impl", 00:19:40.166 "params": { 00:19:40.166 "impl_name": "uring" 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "sock_impl_set_options", 00:19:40.166 "params": { 00:19:40.166 "impl_name": "ssl", 00:19:40.166 "recv_buf_size": 4096, 00:19:40.166 "send_buf_size": 4096, 00:19:40.166 "enable_recv_pipe": true, 00:19:40.166 "enable_quickack": false, 00:19:40.166 "enable_placement_id": 0, 00:19:40.166 "enable_zerocopy_send_server": true, 00:19:40.166 "enable_zerocopy_send_client": false, 00:19:40.166 "zerocopy_threshold": 0, 00:19:40.166 "tls_version": 0, 00:19:40.166 "enable_ktls": false 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "sock_impl_set_options", 00:19:40.166 "params": { 00:19:40.166 "impl_name": "posix", 00:19:40.166 "recv_buf_size": 2097152, 00:19:40.166 "send_buf_size": 2097152, 00:19:40.166 "enable_recv_pipe": true, 00:19:40.166 "enable_quickack": false, 00:19:40.166 "enable_placement_id": 0, 00:19:40.166 "enable_zerocopy_send_server": true, 00:19:40.166 "enable_zerocopy_send_client": false, 00:19:40.166 "zerocopy_threshold": 0, 00:19:40.166 "tls_version": 0, 00:19:40.166 "enable_ktls": false 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "sock_impl_set_options", 00:19:40.166 "params": { 00:19:40.166 "impl_name": "uring", 00:19:40.166 "recv_buf_size": 2097152, 00:19:40.166 "send_buf_size": 2097152, 00:19:40.166 "enable_recv_pipe": true, 00:19:40.166 "enable_quickack": false, 00:19:40.166 "enable_placement_id": 0, 00:19:40.166 "enable_zerocopy_send_server": false, 00:19:40.166 "enable_zerocopy_send_client": false, 00:19:40.166 "zerocopy_threshold": 0, 00:19:40.166 "tls_version": 0, 00:19:40.166 "enable_ktls": false 00:19:40.166 } 00:19:40.166 } 00:19:40.166 ] 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "subsystem": "vmd", 00:19:40.166 "config": [] 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 
"subsystem": "accel", 00:19:40.166 "config": [ 00:19:40.166 { 00:19:40.166 "method": "accel_set_options", 00:19:40.166 "params": { 00:19:40.166 "small_cache_size": 128, 00:19:40.166 "large_cache_size": 16, 00:19:40.166 "task_count": 2048, 00:19:40.166 "sequence_count": 2048, 00:19:40.166 "buf_count": 2048 00:19:40.166 } 00:19:40.166 } 00:19:40.166 ] 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "subsystem": "bdev", 00:19:40.166 "config": [ 00:19:40.166 { 00:19:40.166 "method": "bdev_set_options", 00:19:40.166 "params": { 00:19:40.166 "bdev_io_pool_size": 65535, 00:19:40.166 "bdev_io_cache_size": 256, 00:19:40.166 "bdev_auto_examine": true, 00:19:40.166 "iobuf_small_cache_size": 128, 00:19:40.166 "iobuf_large_cache_size": 16 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "bdev_raid_set_options", 00:19:40.166 "params": { 00:19:40.166 "process_window_size_kb": 1024, 00:19:40.166 "process_max_bandwidth_mb_sec": 0 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "bdev_iscsi_set_options", 00:19:40.166 "params": { 00:19:40.166 "timeout_sec": 30 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "bdev_nvme_set_options", 00:19:40.166 "params": { 00:19:40.166 "action_on_timeout": "none", 00:19:40.166 "timeout_us": 0, 00:19:40.166 "timeout_admin_us": 0, 00:19:40.166 "keep_alive_timeout_ms": 10000, 00:19:40.166 "arbitration_burst": 0, 00:19:40.166 "low_priority_weight": 0, 00:19:40.166 "medium_priority_weight": 0, 00:19:40.166 "high_priority_weight": 0, 00:19:40.166 "nvme_adminq_poll_period_us": 10000, 00:19:40.166 "nvme_ioq_poll_period_us": 0, 00:19:40.166 "io_queue_requests": 0, 00:19:40.166 "delay_cmd_submit": true, 00:19:40.166 "transport_retry_count": 4, 00:19:40.166 "bdev_retry_count": 3, 00:19:40.166 "transport_ack_timeout": 0, 00:19:40.166 "ctrlr_loss_timeout_sec": 0, 00:19:40.166 "reconnect_delay_sec": 0, 00:19:40.166 "fast_io_fail_timeout_sec": 0, 00:19:40.166 "disable_auto_failback": false, 00:19:40.166 "generate_uuids": false, 00:19:40.166 "transport_tos": 0, 00:19:40.166 "nvme_error_stat": false, 00:19:40.166 "rdma_srq_size": 0, 00:19:40.166 "io_path_stat": false, 00:19:40.166 "allow_accel_sequence": false, 00:19:40.166 "rdma_max_cq_size": 0, 00:19:40.166 "rdma_cm_event_timeout_ms": 0, 00:19:40.166 "dhchap_digests": [ 00:19:40.166 "sha256", 00:19:40.166 "sha384", 00:19:40.166 "sha512" 00:19:40.166 ], 00:19:40.166 "dhchap_dhgroups": [ 00:19:40.166 "null", 00:19:40.166 "ffdhe2048", 00:19:40.166 "ffdhe3072", 00:19:40.166 "ffdhe4096", 00:19:40.166 "ffdhe6144", 00:19:40.166 "ffdhe8192" 00:19:40.166 ] 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "bdev_nvme_set_hotplug", 00:19:40.166 "params": { 00:19:40.166 "period_us": 100000, 00:19:40.166 "enable": false 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.166 "method": "bdev_malloc_create", 00:19:40.166 "params": { 00:19:40.166 "name": "malloc0", 00:19:40.166 "num_blocks": 8192, 00:19:40.166 "block_size": 4096, 00:19:40.166 "physical_block_size": 4096, 00:19:40.166 "uuid": "57ce558e-5a59-4394-b72f-58f06d9438b2", 00:19:40.166 "optimal_io_boundary": 0, 00:19:40.166 "md_size": 0, 00:19:40.166 "dif_type": 0, 00:19:40.166 "dif_is_head_of_md": false, 00:19:40.166 "dif_pi_format": 0 00:19:40.166 } 00:19:40.166 }, 00:19:40.166 { 00:19:40.167 "method": "bdev_wait_for_examine" 00:19:40.167 } 00:19:40.167 ] 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "subsystem": "nbd", 00:19:40.167 "config": [] 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "subsystem": "scheduler", 
00:19:40.167 "config": [ 00:19:40.167 { 00:19:40.167 "method": "framework_set_scheduler", 00:19:40.167 "params": { 00:19:40.167 "name": "static" 00:19:40.167 } 00:19:40.167 } 00:19:40.167 ] 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "subsystem": "nvmf", 00:19:40.167 "config": [ 00:19:40.167 { 00:19:40.167 "method": "nvmf_set_config", 00:19:40.167 "params": { 00:19:40.167 "discovery_filter": "match_any", 00:19:40.167 "admin_cmd_passthru": { 00:19:40.167 "identify_ctrlr": false 00:19:40.167 }, 00:19:40.167 "dhchap_digests": [ 00:19:40.167 "sha256", 00:19:40.167 "sha384", 00:19:40.167 "sha512" 00:19:40.167 ], 00:19:40.167 "dhchap_dhgroups": [ 00:19:40.167 "null", 00:19:40.167 "ffdhe2048", 00:19:40.167 "ffdhe3072", 00:19:40.167 "ffdhe4096", 00:19:40.167 "ffdhe6144", 00:19:40.167 "ffdhe8192" 00:19:40.167 ] 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_set_max_subsystems", 00:19:40.167 "params": { 00:19:40.167 "max_subsystems": 1024 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_set_crdt", 00:19:40.167 "params": { 00:19:40.167 "crdt1": 0, 00:19:40.167 "crdt2": 0, 00:19:40.167 "crdt3": 0 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_create_transport", 00:19:40.167 "params": { 00:19:40.167 "trtype": "TCP", 00:19:40.167 "max_queue_depth": 128, 00:19:40.167 "max_io_qpairs_per_ctrlr": 127, 00:19:40.167 "in_capsule_data_size": 4096, 00:19:40.167 "max_io_size": 131072, 00:19:40.167 "io_unit_size": 131072, 00:19:40.167 "max_aq_depth": 128, 00:19:40.167 "num_shared_buffers": 511, 00:19:40.167 "buf_cache_size": 4294967295, 00:19:40.167 "dif_insert_or_strip": false, 00:19:40.167 "zcopy": false, 00:19:40.167 "c2h_success": false, 00:19:40.167 "sock_priority": 0, 00:19:40.167 "abort_timeout_sec": 1, 00:19:40.167 "ack_timeout": 0, 00:19:40.167 "data_wr_pool_size": 0 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_create_subsystem", 00:19:40.167 "params": { 00:19:40.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.167 "allow_any_host": false, 00:19:40.167 "serial_number": "00000000000000000000", 00:19:40.167 "model_number": "SPDK bdev Controller", 00:19:40.167 "max_namespaces": 32, 00:19:40.167 "min_cntlid": 1, 00:19:40.167 "max_cntlid": 65519, 00:19:40.167 "ana_reporting": false 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_subsystem_add_host", 00:19:40.167 "params": { 00:19:40.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.167 "host": "nqn.2016-06.io.spdk:host1", 00:19:40.167 "psk": "key0" 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_subsystem_add_ns", 00:19:40.167 "params": { 00:19:40.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.167 "namespace": { 00:19:40.167 "nsid": 1, 00:19:40.167 "bdev_name": "malloc0", 00:19:40.167 "nguid": "57CE558E5A594394B72F58F06D9438B2", 00:19:40.167 "uuid": "57ce558e-5a59-4394-b72f-58f06d9438b2", 00:19:40.167 "no_auto_visible": false 00:19:40.167 } 00:19:40.167 } 00:19:40.167 }, 00:19:40.167 { 00:19:40.167 "method": "nvmf_subsystem_add_listener", 00:19:40.167 "params": { 00:19:40.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.167 "listen_address": { 00:19:40.167 "trtype": "TCP", 00:19:40.167 "adrfam": "IPv4", 00:19:40.167 "traddr": "10.0.0.3", 00:19:40.167 "trsvcid": "4420" 00:19:40.167 }, 00:19:40.167 "secure_channel": false, 00:19:40.167 "sock_impl": "ssl" 00:19:40.167 } 00:19:40.167 } 00:19:40.167 ] 00:19:40.167 } 00:19:40.167 ] 00:19:40.167 }' 00:19:40.167 11:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72931 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72931 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72931 ']' 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.167 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 [2024-10-07 11:30:35.657301] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:40.167 [2024-10-07 11:30:35.657422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.426 [2024-10-07 11:30:35.794625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.426 [2024-10-07 11:30:35.907379] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.426 [2024-10-07 11:30:35.907612] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.426 [2024-10-07 11:30:35.907632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.426 [2024-10-07 11:30:35.907641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.426 [2024-10-07 11:30:35.907648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:40.426 [2024-10-07 11:30:35.908108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.684 [2024-10-07 11:30:36.076628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:40.684 [2024-10-07 11:30:36.156560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.684 [2024-10-07 11:30:36.194788] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.684 [2024-10-07 11:30:36.195100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72963 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72963 /var/tmp/bdevperf.sock 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72963 ']' 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:41.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
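Both processes in this second phase come up purely from the configurations captured earlier: the target is restarted with -c /dev/fd/62 (nvmf/common.sh@506 above) and a fresh bdevperf, pid 72963, is launched with -c /dev/fd/63 (target/tls.sh@274 above), so none of the per-RPC setup is repeated. A minimal sketch of that capture-and-replay pattern, with the process substitution written out explicitly; the variable names are illustrative and the ip netns wrapper used above is omitted:

  # capture the live configuration from each RPC socket
  tgtcfg="$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)"
  bperfcfg="$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)"
  # replay them into fresh processes; the <(...) substitutions are the /dev/fd/NN paths in the log
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &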
00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.620 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:41.620 "subsystems": [ 00:19:41.620 { 00:19:41.620 "subsystem": "keyring", 00:19:41.620 "config": [ 00:19:41.620 { 00:19:41.620 "method": "keyring_file_add_key", 00:19:41.620 "params": { 00:19:41.620 "name": "key0", 00:19:41.620 "path": "/tmp/tmp.KMgiJFx5lT" 00:19:41.620 } 00:19:41.620 } 00:19:41.620 ] 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "subsystem": "iobuf", 00:19:41.620 "config": [ 00:19:41.620 { 00:19:41.620 "method": "iobuf_set_options", 00:19:41.620 "params": { 00:19:41.620 "small_pool_count": 8192, 00:19:41.620 "large_pool_count": 1024, 00:19:41.620 "small_bufsize": 8192, 00:19:41.620 "large_bufsize": 135168 00:19:41.620 } 00:19:41.620 } 00:19:41.620 ] 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "subsystem": "sock", 00:19:41.620 "config": [ 00:19:41.620 { 00:19:41.620 "method": "sock_set_default_impl", 00:19:41.620 "params": { 00:19:41.620 "impl_name": "uring" 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "sock_impl_set_options", 00:19:41.620 "params": { 00:19:41.620 "impl_name": "ssl", 00:19:41.620 "recv_buf_size": 4096, 00:19:41.620 "send_buf_size": 4096, 00:19:41.620 "enable_recv_pipe": true, 00:19:41.620 "enable_quickack": false, 00:19:41.620 "enable_placement_id": 0, 00:19:41.620 "enable_zerocopy_send_server": true, 00:19:41.620 "enable_zerocopy_send_client": false, 00:19:41.620 "zerocopy_threshold": 0, 00:19:41.620 "tls_version": 0, 00:19:41.620 "enable_ktls": false 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "sock_impl_set_options", 00:19:41.620 "params": { 00:19:41.620 "impl_name": "posix", 00:19:41.620 "recv_buf_size": 2097152, 00:19:41.620 "send_buf_size": 2097152, 00:19:41.620 "enable_recv_pipe": true, 00:19:41.620 "enable_quickack": false, 00:19:41.620 "enable_placement_id": 0, 00:19:41.620 "enable_zerocopy_send_server": true, 00:19:41.620 "enable_zerocopy_send_client": false, 00:19:41.620 "zerocopy_threshold": 0, 00:19:41.620 "tls_version": 0, 00:19:41.620 "enable_ktls": false 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "sock_impl_set_options", 00:19:41.620 "params": { 00:19:41.620 "impl_name": "uring", 00:19:41.620 "recv_buf_size": 2097152, 00:19:41.620 "send_buf_size": 2097152, 00:19:41.620 "enable_recv_pipe": true, 00:19:41.620 "enable_quickack": false, 00:19:41.620 "enable_placement_id": 0, 00:19:41.620 "enable_zerocopy_send_server": false, 00:19:41.620 "enable_zerocopy_send_client": false, 00:19:41.620 "zerocopy_threshold": 0, 00:19:41.620 "tls_version": 0, 00:19:41.620 "enable_ktls": false 00:19:41.620 } 00:19:41.620 } 00:19:41.620 ] 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "subsystem": "vmd", 00:19:41.620 "config": [] 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "subsystem": "accel", 00:19:41.620 "config": [ 00:19:41.620 { 00:19:41.620 "method": "accel_set_options", 00:19:41.620 "params": { 00:19:41.620 "small_cache_size": 128, 00:19:41.620 "large_cache_size": 16, 00:19:41.620 "task_count": 2048, 00:19:41.620 "sequence_count": 2048, 00:19:41.620 "buf_count": 2048 00:19:41.620 } 00:19:41.620 } 00:19:41.620 ] 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "subsystem": "bdev", 00:19:41.620 "config": [ 00:19:41.620 { 00:19:41.620 "method": "bdev_set_options", 00:19:41.620 "params": { 
00:19:41.620 "bdev_io_pool_size": 65535, 00:19:41.620 "bdev_io_cache_size": 256, 00:19:41.620 "bdev_auto_examine": true, 00:19:41.620 "iobuf_small_cache_size": 128, 00:19:41.620 "iobuf_large_cache_size": 16 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "bdev_raid_set_options", 00:19:41.620 "params": { 00:19:41.620 "process_window_size_kb": 1024, 00:19:41.620 "process_max_bandwidth_mb_sec": 0 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "bdev_iscsi_set_options", 00:19:41.620 "params": { 00:19:41.620 "timeout_sec": 30 00:19:41.620 } 00:19:41.620 }, 00:19:41.620 { 00:19:41.620 "method": "bdev_nvme_set_options", 00:19:41.620 "params": { 00:19:41.620 "action_on_timeout": "none", 00:19:41.620 "timeout_us": 0, 00:19:41.620 "timeout_admin_us": 0, 00:19:41.620 "keep_alive_timeout_ms": 10000, 00:19:41.620 "arbitration_burst": 0, 00:19:41.620 "low_priority_weight": 0, 00:19:41.620 "medium_priority_weight": 0, 00:19:41.620 "high_priority_weight": 0, 00:19:41.620 "nvme_adminq_poll_period_us": 10000, 00:19:41.620 "nvme_ioq_poll_period_us": 0, 00:19:41.620 "io_queue_requests": 512, 00:19:41.620 "delay_cmd_submit": true, 00:19:41.620 "transport_retry_count": 4, 00:19:41.620 "bdev_retry_count": 3, 00:19:41.620 "transport_ack_timeout": 0, 00:19:41.620 "ctrlr_loss_timeout_sec": 0, 00:19:41.620 "reconnect_delay_sec": 0, 00:19:41.621 "fast_io_fail_timeout_sec": 0, 00:19:41.621 "disable_auto_failback": false, 00:19:41.621 "generate_uuids": false, 00:19:41.621 "transport_tos": 0, 00:19:41.621 "nvme_error_stat": false, 00:19:41.621 "rdma_srq_size": 0, 00:19:41.621 "io_path_stat": false, 00:19:41.621 "allow_accel_sequence": false, 00:19:41.621 "rdma_max_cq_size": 0, 00:19:41.621 "rdma_cm_event_timeout_ms": 0, 00:19:41.621 "dhchap_digests": [ 00:19:41.621 "sha256", 00:19:41.621 "sha384", 00:19:41.621 "sha512" 00:19:41.621 ], 00:19:41.621 "dhchap_dhgroups": [ 00:19:41.621 "null", 00:19:41.621 "ffdhe2048", 00:19:41.621 "ffdhe3072", 00:19:41.621 "ffdhe4096", 00:19:41.621 "ffdhe6144", 00:19:41.621 "ffdhe8192" 00:19:41.621 ] 00:19:41.621 } 00:19:41.621 }, 00:19:41.621 { 00:19:41.621 "method": "bdev_nvme_attach_controller", 00:19:41.621 "params": { 00:19:41.621 "name": "nvme0", 00:19:41.621 "trtype": "TCP", 00:19:41.621 "adrfam": "IPv4", 00:19:41.621 "traddr": "10.0.0.3", 00:19:41.621 "trsvcid": "4420", 00:19:41.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.621 "prchk_reftag": false, 00:19:41.621 "prchk_guard": false, 00:19:41.621 "ctrlr_loss_timeout_sec": 0, 00:19:41.621 "reconnect_delay_sec": 0, 00:19:41.621 "fast_io_fail_timeout_sec": 0, 00:19:41.621 "psk": "key0", 00:19:41.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.621 "hdgst": false, 00:19:41.621 "ddgst": false, 00:19:41.621 "multipath": "multipath" 00:19:41.621 } 00:19:41.621 }, 00:19:41.621 { 00:19:41.621 "method": "bdev_nvme_set_hotplug", 00:19:41.621 "params": { 00:19:41.621 "period_us": 100000, 00:19:41.621 "enable": false 00:19:41.621 } 00:19:41.621 }, 00:19:41.621 { 00:19:41.621 "method": "bdev_enable_histogram", 00:19:41.621 "params": { 00:19:41.621 "name": "nvme0n1", 00:19:41.621 "enable": true 00:19:41.621 } 00:19:41.621 }, 00:19:41.621 { 00:19:41.621 "method": "bdev_wait_for_examine" 00:19:41.621 } 00:19:41.621 ] 00:19:41.621 }, 00:19:41.621 { 00:19:41.621 "subsystem": "nbd", 00:19:41.621 "config": [] 00:19:41.621 } 00:19:41.621 ] 00:19:41.621 }' 00:19:41.621 [2024-10-07 11:30:36.896718] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:19:41.621 [2024-10-07 11:30:36.896847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72963 ] 00:19:41.621 [2024-10-07 11:30:37.034424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.880 [2024-10-07 11:30:37.150864] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.880 [2024-10-07 11:30:37.286617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:41.880 [2024-10-07 11:30:37.336430] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.816 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.816 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:42.816 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.816 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:42.816 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.816 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.074 Running I/O for 1 seconds... 00:19:44.010 4093.00 IOPS, 15.99 MiB/s 00:19:44.010 Latency(us) 00:19:44.010 [2024-10-07T11:30:39.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.010 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:44.010 Verification LBA range: start 0x0 length 0x2000 00:19:44.010 nvme0n1 : 1.02 4154.42 16.23 0.00 0.00 30530.64 6315.29 22639.71 00:19:44.010 [2024-10-07T11:30:39.533Z] =================================================================================================================== 00:19:44.010 [2024-10-07T11:30:39.533Z] Total : 4154.42 16.23 0.00 0.00 30530.64 6315.29 22639.71 00:19:44.010 { 00:19:44.010 "results": [ 00:19:44.010 { 00:19:44.010 "job": "nvme0n1", 00:19:44.010 "core_mask": "0x2", 00:19:44.010 "workload": "verify", 00:19:44.010 "status": "finished", 00:19:44.010 "verify_range": { 00:19:44.010 "start": 0, 00:19:44.010 "length": 8192 00:19:44.010 }, 00:19:44.010 "queue_depth": 128, 00:19:44.010 "io_size": 4096, 00:19:44.010 "runtime": 1.016267, 00:19:44.010 "iops": 4154.42004906191, 00:19:44.010 "mibps": 16.228203316648084, 00:19:44.010 "io_failed": 0, 00:19:44.010 "io_timeout": 0, 00:19:44.010 "avg_latency_us": 30530.639596916586, 00:19:44.010 "min_latency_us": 6315.2872727272725, 00:19:44.010 "max_latency_us": 22639.70909090909 00:19:44.010 } 00:19:44.010 ], 00:19:44.010 "core_count": 1 00:19:44.010 } 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
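The process_shm helper being traced here packages the target's trace shared-memory file so it can be replayed offline with spdk_trace, as the earlier app_setup_trace notices suggested; stripped of the xtrace plumbing, the step amounts to the single tar invocation echoed just below:

  # archive the shm trace file (/dev/shm/nvmf_trace.0) into the job's output directory
  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0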
00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:44.010 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:44.010 nvmf_trace.0 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72963 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72963 ']' 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72963 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72963 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.269 killing process with pid 72963 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72963' 00:19:44.269 Received shutdown signal, test time was about 1.000000 seconds 00:19:44.269 00:19:44.269 Latency(us) 00:19:44.269 [2024-10-07T11:30:39.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.269 [2024-10-07T11:30:39.792Z] =================================================================================================================== 00:19:44.269 [2024-10-07T11:30:39.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72963 00:19:44.269 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72963 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:44.528 rmmod nvme_tcp 00:19:44.528 rmmod nvme_fabrics 00:19:44.528 rmmod nvme_keyring 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 72931 ']' 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 72931 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72931 ']' 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72931 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72931 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.528 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.528 killing process with pid 72931 00:19:44.529 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72931' 00:19:44.529 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72931 00:19:44.529 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72931 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:44.787 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:44.787 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iDGeA6lLxk /tmp/tmp.EsKs7mHbZl /tmp/tmp.KMgiJFx5lT 00:19:45.046 00:19:45.046 real 1m33.740s 00:19:45.046 user 2m35.093s 00:19:45.046 sys 0m27.922s 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.046 ************************************ 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.046 END TEST nvmf_tls 00:19:45.046 ************************************ 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.046 ************************************ 00:19:45.046 START TEST nvmf_fips 00:19:45.046 ************************************ 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:45.046 * Looking for test storage... 
00:19:45.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:19:45.046 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:45.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.306 --rc genhtml_branch_coverage=1 00:19:45.306 --rc genhtml_function_coverage=1 00:19:45.306 --rc genhtml_legend=1 00:19:45.306 --rc geninfo_all_blocks=1 00:19:45.306 --rc geninfo_unexecuted_blocks=1 00:19:45.306 00:19:45.306 ' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:45.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.306 --rc genhtml_branch_coverage=1 00:19:45.306 --rc genhtml_function_coverage=1 00:19:45.306 --rc genhtml_legend=1 00:19:45.306 --rc geninfo_all_blocks=1 00:19:45.306 --rc geninfo_unexecuted_blocks=1 00:19:45.306 00:19:45.306 ' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:45.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.306 --rc genhtml_branch_coverage=1 00:19:45.306 --rc genhtml_function_coverage=1 00:19:45.306 --rc genhtml_legend=1 00:19:45.306 --rc geninfo_all_blocks=1 00:19:45.306 --rc geninfo_unexecuted_blocks=1 00:19:45.306 00:19:45.306 ' 00:19:45.306 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:45.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.306 --rc genhtml_branch_coverage=1 00:19:45.306 --rc genhtml_function_coverage=1 00:19:45.306 --rc genhtml_legend=1 00:19:45.306 --rc geninfo_all_blocks=1 00:19:45.306 --rc geninfo_unexecuted_blocks=1 00:19:45.306 00:19:45.306 ' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
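The lt 1.15 2 check traced above (and the ge 3.1.1 3.0.0 OpenSSL check a little further down) both funnel into cmp_versions from scripts/common.sh, which splits each version string on '.' and '-' and compares the pieces one numeric component at a time. A rough standalone sketch of that comparison follows; it is not the project's exact helper, which also tracks the gt/eq cases needed for the other operators:

# Component-wise "is version A < version B", in the spirit of cmp_versions.
version_lt() {
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0   # missing or non-numeric pieces count as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal, so not "less than"
}

version_lt 1.15 2      && echo "1.15 < 2"        # the lcov check above takes this branch
version_lt 3.1.1 3.0.0 || echo "3.1.1 >= 3.0.0"  # why the ge 3.1.1 3.0.0 check below passes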
00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:45.307 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:45.308 Error setting digest 00:19:45.308 40E24DB0127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:45.308 40E24DB0127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:45.308 
11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.308 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:45.568 Cannot find device "nvmf_init_br" 00:19:45.568 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:45.568 Cannot find device "nvmf_init_br2" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:45.568 Cannot find device "nvmf_tgt_br" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.568 Cannot find device "nvmf_tgt_br2" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:45.568 Cannot find device "nvmf_init_br" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:45.568 Cannot find device "nvmf_init_br2" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:45.568 Cannot find device "nvmf_tgt_br" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:45.568 Cannot find device "nvmf_tgt_br2" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:45.568 Cannot find device "nvmf_br" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:45.568 Cannot find device "nvmf_init_if" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:45.568 Cannot find device "nvmf_init_if2" 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.568 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.568 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.568 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:45.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:19:45.827 00:19:45.827 --- 10.0.0.3 ping statistics --- 00:19:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.827 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:45.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:45.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:45.827 00:19:45.827 --- 10.0.0.4 ping statistics --- 00:19:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.827 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:45.827 00:19:45.827 --- 10.0.0.1 ping statistics --- 00:19:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.827 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:45.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:45.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:45.827 00:19:45.827 --- 10.0.0.2 ping statistics --- 00:19:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.827 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=73292 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 73292 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73292 ']' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.827 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.827 [2024-10-07 11:30:41.319313] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
00:19:45.827 [2024-10-07 11:30:41.319425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.086 [2024-10-07 11:30:41.457139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.086 [2024-10-07 11:30:41.586532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.086 [2024-10-07 11:30:41.586604] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.086 [2024-10-07 11:30:41.586619] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.086 [2024-10-07 11:30:41.586630] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.086 [2024-10-07 11:30:41.586640] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.086 [2024-10-07 11:30:41.587253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.345 [2024-10-07 11:30:41.646457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:46.912 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.912 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:46.912 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:46.912 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.912 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oaA 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oaA 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oaA 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oaA 00:19:47.170 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.428 [2024-10-07 11:30:42.763211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.428 [2024-10-07 11:30:42.779144] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.428 [2024-10-07 11:30:42.779384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:47.428 malloc0 00:19:47.428 11:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.428 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73332 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73332 /var/tmp/bdevperf.sock 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73332 ']' 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.429 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.429 [2024-10-07 11:30:42.944521] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:19:47.429 [2024-10-07 11:30:42.944655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73332 ] 00:19:47.687 [2024-10-07 11:30:43.083413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.945 [2024-10-07 11:30:43.231297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.945 [2024-10-07 11:30:43.284062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.513 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.513 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:48.513 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oaA 00:19:48.771 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.030 [2024-10-07 11:30:44.542703] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.288 TLSTESTn1 00:19:49.288 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.288 Running I/O for 10 seconds... 
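Before the 10-second run begins, the commands traced just above wire up the initiator side: the TLS PSK is written to a temporary file with 0600 permissions, registered with the already-running bdevperf instance via keyring_file_add_key, and then referenced by name when attaching the NVMe/TCP controller. Condensed into one sketch from those traced commands (the target subsystem and listener were configured earlier through rpc.py, and the temp-file suffix differs on every run):

# Initiator-side TLS setup condensed from the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# bdevperf was started with: bdevperf -m 0x4 -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 10
"$RPC" -s "$BPERF_SOCK" keyring_file_add_key key0 "$key_path"
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# This call is what produces the per-second IOPS samples and the summary below.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The -z flag appears to be what keeps bdevperf idle until it is driven over its RPC socket, which is why the key and controller can be added over $BPERF_SOCK before perform_tests starts the timed run.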
00:19:51.603 4085.00 IOPS, 15.96 MiB/s [2024-10-07T11:30:48.060Z] 4088.50 IOPS, 15.97 MiB/s [2024-10-07T11:30:48.996Z] 4007.67 IOPS, 15.65 MiB/s [2024-10-07T11:30:49.930Z] 4027.00 IOPS, 15.73 MiB/s [2024-10-07T11:30:50.898Z] 4011.40 IOPS, 15.67 MiB/s [2024-10-07T11:30:51.834Z] 4001.50 IOPS, 15.63 MiB/s [2024-10-07T11:30:52.770Z] 4029.14 IOPS, 15.74 MiB/s [2024-10-07T11:30:54.145Z] 4049.75 IOPS, 15.82 MiB/s [2024-10-07T11:30:55.084Z] 4060.56 IOPS, 15.86 MiB/s [2024-10-07T11:30:55.084Z] 4071.50 IOPS, 15.90 MiB/s 00:19:59.561 Latency(us) 00:19:59.561 [2024-10-07T11:30:55.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.561 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.561 Verification LBA range: start 0x0 length 0x2000 00:19:59.561 TLSTESTn1 : 10.02 4077.72 15.93 0.00 0.00 31333.26 5481.19 36938.47 00:19:59.561 [2024-10-07T11:30:55.084Z] =================================================================================================================== 00:19:59.561 [2024-10-07T11:30:55.084Z] Total : 4077.72 15.93 0.00 0.00 31333.26 5481.19 36938.47 00:19:59.561 { 00:19:59.561 "results": [ 00:19:59.561 { 00:19:59.561 "job": "TLSTESTn1", 00:19:59.561 "core_mask": "0x4", 00:19:59.561 "workload": "verify", 00:19:59.561 "status": "finished", 00:19:59.561 "verify_range": { 00:19:59.561 "start": 0, 00:19:59.561 "length": 8192 00:19:59.561 }, 00:19:59.561 "queue_depth": 128, 00:19:59.561 "io_size": 4096, 00:19:59.561 "runtime": 10.015655, 00:19:59.561 "iops": 4077.7163350774363, 00:19:59.561 "mibps": 15.928579433896235, 00:19:59.561 "io_failed": 0, 00:19:59.561 "io_timeout": 0, 00:19:59.561 "avg_latency_us": 31333.26333403821, 00:19:59.561 "min_latency_us": 5481.192727272727, 00:19:59.561 "max_latency_us": 36938.472727272725 00:19:59.561 } 00:19:59.561 ], 00:19:59.561 "core_count": 1 00:19:59.562 } 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:59.562 nvmf_trace.0 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73332 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73332 ']' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
73332 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73332 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:59.562 killing process with pid 73332 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73332' 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73332 00:19:59.562 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.562 00:19:59.562 Latency(us) 00:19:59.562 [2024-10-07T11:30:55.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.562 [2024-10-07T11:30:55.085Z] =================================================================================================================== 00:19:59.562 [2024-10-07T11:30:55.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.562 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73332 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.820 rmmod nvme_tcp 00:19:59.820 rmmod nvme_fabrics 00:19:59.820 rmmod nvme_keyring 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 73292 ']' 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 73292 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73292 ']' 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73292 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73292 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73292' 00:19:59.820 killing process with pid 73292 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73292 00:19:59.820 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73292 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:00.079 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:20:00.338 11:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oaA 00:20:00.338 00:20:00.338 real 0m15.336s 00:20:00.338 user 0m21.565s 00:20:00.338 sys 0m5.857s 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:00.338 ************************************ 00:20:00.338 END TEST nvmf_fips 00:20:00.338 ************************************ 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.338 ************************************ 00:20:00.338 START TEST nvmf_control_msg_list 00:20:00.338 ************************************ 00:20:00.338 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:00.597 * Looking for test storage... 00:20:00.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.597 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:00.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.597 --rc genhtml_branch_coverage=1 00:20:00.597 --rc genhtml_function_coverage=1 00:20:00.597 --rc genhtml_legend=1 00:20:00.597 --rc geninfo_all_blocks=1 00:20:00.597 --rc geninfo_unexecuted_blocks=1 00:20:00.597 00:20:00.597 ' 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:00.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.597 --rc genhtml_branch_coverage=1 00:20:00.597 --rc genhtml_function_coverage=1 00:20:00.597 --rc genhtml_legend=1 00:20:00.597 --rc geninfo_all_blocks=1 00:20:00.597 --rc geninfo_unexecuted_blocks=1 00:20:00.597 00:20:00.597 ' 00:20:00.597 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:00.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.598 --rc genhtml_branch_coverage=1 00:20:00.598 --rc genhtml_function_coverage=1 00:20:00.598 --rc genhtml_legend=1 00:20:00.598 --rc geninfo_all_blocks=1 00:20:00.598 --rc geninfo_unexecuted_blocks=1 00:20:00.598 00:20:00.598 ' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:00.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.598 --rc genhtml_branch_coverage=1 00:20:00.598 --rc genhtml_function_coverage=1 00:20:00.598 --rc genhtml_legend=1 00:20:00.598 --rc geninfo_all_blocks=1 00:20:00.598 --rc 
geninfo_unexecuted_blocks=1 00:20:00.598 00:20:00.598 ' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.598 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:00.598 Cannot find device "nvmf_init_br" 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:00.598 Cannot find device "nvmf_init_br2" 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:00.598 Cannot find device "nvmf_tgt_br" 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.598 Cannot find device "nvmf_tgt_br2" 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:00.598 Cannot find device "nvmf_init_br" 00:20:00.598 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:20:00.599 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:00.599 Cannot find device "nvmf_init_br2" 00:20:00.599 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:20:00.599 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:00.857 Cannot find device "nvmf_tgt_br" 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:00.857 Cannot find device "nvmf_tgt_br2" 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:00.857 Cannot find device "nvmf_br" 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:00.857 Cannot find 
device "nvmf_init_if" 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:00.857 Cannot find device "nvmf_init_if2" 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:00.857 11:30:56 
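Everything nvmf_veth_init has done so far: create the nvmf_tgt_ns_spdk namespace, add four veth pairs, move the two target-side ends into the namespace, and address them as 10.0.0.1/.2 (initiator, root namespace) and 10.0.0.3/.4 (target namespace). The commands that follow in the trace enslave the peer ends to a bridge (nvmf_br) and open TCP port 4420. A condensed sketch of one initiator/target pair with the same names and addresses; run as root, and note the script's `ipts` wrapper is what appends the SPDK_NVMF comment reproduced here.

ip netns add nvmf_tgt_ns_spdk

# one pair per side; the *_br ends stay in the root namespace and join the bridge below
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator side gets 10.0.0.1, target namespace gets 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the root-namespace ends so 10.0.0.1 can reach 10.0.0.3
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# admit NVMe/TCP on 4420 and allow bridged forwarding, tagging both rules so teardown can strip them
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3    # the same reachability check the trace performs next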
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:00.857 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:01.116 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.116 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:20:01.116 00:20:01.116 --- 10.0.0.3 ping statistics --- 00:20:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.116 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:01.116 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:01.116 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:20:01.116 00:20:01.116 --- 10.0.0.4 ping statistics --- 00:20:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.116 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:01.116 00:20:01.116 --- 10.0.0.1 ping statistics --- 00:20:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.116 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:01.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:01.116 00:20:01.116 --- 10.0.0.2 ping statistics --- 00:20:01.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.116 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73714 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73714 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73714 ']' 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
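With the ping checks green, nvmfappstart launches the target inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF`, pid 73714 here) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up..." line is about. A rough stand-in for that wait, assuming the default socket path; the real waitforlisten in autotest_common.sh also probes via rpc.py and is more thorough.

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                  # max_retries=100, as in the trace
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S $rpc_sock ]] && break                  # UNIX-domain socket is up, RPCs can proceed
    sleep 0.5
done
[[ -S $rpc_sock ]] || { echo "timed out waiting for $rpc_sock" >&2; exit 1; }

Filesystem paths are shared across network namespaces, so the RPC socket created by the namespaced target is reachable from the test script without another `ip netns exec`.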
00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.116 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.116 [2024-10-07 11:30:56.523359] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:01.116 [2024-10-07 11:30:56.523451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.374 [2024-10-07 11:30:56.662232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.374 [2024-10-07 11:30:56.787145] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.374 [2024-10-07 11:30:56.787217] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.374 [2024-10-07 11:30:56.787232] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.374 [2024-10-07 11:30:56.787243] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.374 [2024-10-07 11:30:56.787253] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.374 [2024-10-07 11:30:56.787734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.374 [2024-10-07 11:30:56.845863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 [2024-10-07 11:30:56.960672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 Malloc0 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.633 [2024-10-07 11:30:57.014717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73744 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73745 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73746 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73744 00:20:01.633 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:01.892 [2024-10-07 11:30:57.189566] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:01.892 [2024-10-07 11:30:57.189800] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:01.892 [2024-10-07 11:30:57.189985] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:02.827 Initializing NVMe Controllers 00:20:02.827 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.827 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:02.827 Initialization complete. Launching workers. 00:20:02.827 ======================================================== 00:20:02.827 Latency(us) 00:20:02.827 Device Information : IOPS MiB/s Average min max 00:20:02.827 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3085.00 12.05 323.78 220.06 753.60 00:20:02.827 ======================================================== 00:20:02.827 Total : 3085.00 12.05 323.78 220.06 753.60 00:20:02.827 00:20:02.827 Initializing NVMe Controllers 00:20:02.827 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.827 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:02.827 Initialization complete. Launching workers. 00:20:02.827 ======================================================== 00:20:02.827 Latency(us) 00:20:02.827 Device Information : IOPS MiB/s Average min max 00:20:02.827 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3073.00 12.00 325.01 221.10 743.90 00:20:02.827 ======================================================== 00:20:02.827 Total : 3073.00 12.00 325.01 221.10 743.90 00:20:02.827 00:20:02.827 Initializing NVMe Controllers 00:20:02.827 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.827 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:02.827 Initialization complete. Launching workers. 
00:20:02.827 ======================================================== 00:20:02.827 Latency(us) 00:20:02.827 Device Information : IOPS MiB/s Average min max 00:20:02.827 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3084.00 12.05 323.86 217.51 741.25 00:20:02.827 ======================================================== 00:20:02.827 Total : 3084.00 12.05 323.86 217.51 741.25 00:20:02.827 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73745 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73746 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.827 rmmod nvme_tcp 00:20:02.827 rmmod nvme_fabrics 00:20:02.827 rmmod nvme_keyring 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73714 ']' 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73714 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73714 ']' 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73714 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.827 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73714 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.085 killing process with pid 73714 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73714' 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73714 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 73714 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:03.085 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:20:03.344 00:20:03.344 real 0m3.007s 00:20:03.344 user 0m4.912s 00:20:03.344 
sys 0m1.344s 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.344 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:03.344 ************************************ 00:20:03.344 END TEST nvmf_control_msg_list 00:20:03.344 ************************************ 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.603 ************************************ 00:20:03.603 START TEST nvmf_wait_for_buf 00:20:03.603 ************************************ 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:03.603 * Looking for test storage... 00:20:03.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:03.603 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:03.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.603 --rc genhtml_branch_coverage=1 00:20:03.603 --rc genhtml_function_coverage=1 00:20:03.603 --rc genhtml_legend=1 00:20:03.603 --rc geninfo_all_blocks=1 00:20:03.603 --rc geninfo_unexecuted_blocks=1 00:20:03.603 00:20:03.603 ' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:03.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.603 --rc genhtml_branch_coverage=1 00:20:03.603 --rc genhtml_function_coverage=1 00:20:03.603 --rc genhtml_legend=1 00:20:03.603 --rc geninfo_all_blocks=1 00:20:03.603 --rc geninfo_unexecuted_blocks=1 00:20:03.603 00:20:03.603 ' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:03.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.603 --rc genhtml_branch_coverage=1 00:20:03.603 --rc genhtml_function_coverage=1 00:20:03.603 --rc genhtml_legend=1 00:20:03.603 --rc geninfo_all_blocks=1 00:20:03.603 --rc geninfo_unexecuted_blocks=1 00:20:03.603 00:20:03.603 ' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:03.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.603 --rc genhtml_branch_coverage=1 00:20:03.603 --rc genhtml_function_coverage=1 00:20:03.603 --rc genhtml_legend=1 00:20:03.603 --rc geninfo_all_blocks=1 00:20:03.603 --rc geninfo_unexecuted_blocks=1 00:20:03.603 00:20:03.603 ' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.603 11:30:59 
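Both scripts open with the same coverage preamble: read the installed lcov version via `lcov --version | awk '{print $NF}'`, feed it to the cmp_versions helper from scripts/common.sh, and set LCOV_OPTS/LCOV accordingly. cmp_versions is a field-wise numeric compare over dot/dash-separated version strings; a compact sketch of the same idea follows. Which --rc spellings are picked for which lcov release is decided in autotest_common.sh and is not fully visible in this trace, so the branch below is illustrative only.

# ver_lt A B -> success (0) when version A sorts strictly before version B
ver_lt() {
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # versions are equal
}

installed=$(lcov --version | awk '{print $NF}')   # last field of "lcov: LCOV version ..."
if ver_lt "$installed" 2; then
    # pre-2.0 lcov spells the switches lcov_branch_coverage / lcov_function_coverage
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi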
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.603 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:03.603 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
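From here wait_for_buf repeats the lifecycle every one of these target tests goes through: nvmftestinit installs nvmftestfini as the SIGINT/SIGTERM/EXIT trap, prepare_net_devs rebuilds the virtual topology (NET_TYPE=virt, so veth pairs rather than physical NICs), the transport options are fixed to '-t tcp -o', and nvme-tcp is loaded on the host side; the trap later unwinds it all, exactly as seen at the end of the control-msg-list run above. Reduced to a skeleton, where the helper calls stand for the command sequences already traced, not for self-contained definitions:

nvmftestinit() {
    trap nvmftestfini SIGINT SIGTERM EXIT    # cleanup is guaranteed even if the test dies mid-way
    prepare_net_devs                         # NET_TYPE=virt -> nvmf_veth_init (namespace + veth pairs)
    NVMF_TRANSPORT_OPTS='-t tcp -o'
    modprobe nvme-tcp                        # host-side driver used by nvme connect / perf initiators
}

nvmftestfini() {
    nvmfcleanup                              # sync, then unload nvme-tcp / nvme-fabrics / nvme-keyring
    killprocess "$nvmfpid"                   # stop the nvmf_tgt launched by nvmfappstart
    nvmf_tcp_fini                            # strip SPDK_NVMF iptables rules, nvmf_veth_fini, remove_spdk_ns
}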
00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.604 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.862 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:03.863 Cannot find device "nvmf_init_br" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:03.863 Cannot find device "nvmf_init_br2" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:03.863 Cannot find device "nvmf_tgt_br" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.863 Cannot find device "nvmf_tgt_br2" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:03.863 Cannot find device "nvmf_init_br" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:03.863 Cannot find device "nvmf_init_br2" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:03.863 Cannot find device "nvmf_tgt_br" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:03.863 Cannot find device "nvmf_tgt_br2" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:03.863 Cannot find device "nvmf_br" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:03.863 Cannot find device "nvmf_init_if" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:03.863 Cannot find device "nvmf_init_if2" 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.863 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.863 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.121 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:20:04.122 00:20:04.122 --- 10.0.0.3 ping statistics --- 00:20:04.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.122 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:20:04.122 00:20:04.122 --- 10.0.0.4 ping statistics --- 00:20:04.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.122 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:04.122 00:20:04.122 --- 10.0.0.1 ping statistics --- 00:20:04.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.122 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
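The block above (common.sh@177 through @225) is where nvmf_veth_init builds the isolated test network: a network namespace for the target, two veth pairs per side, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and one ping per address to prove connectivity. A condensed, standalone sketch of the same steps, with interface names and addresses taken from the trace (the SPDK_NVMF iptables comment tags are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator-side addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target-side addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" up
    ip link set "$port" master nvmf_br                     # host-side veth ends join the bridge
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                          # host -> target side
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host side
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2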
00:20:04.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:04.122 00:20:04.122 --- 10.0.0.2 ping statistics --- 00:20:04.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.122 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=73983 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 73983 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73983 ']' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.122 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:04.380 [2024-10-07 11:30:59.646161] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
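With connectivity verified, nvmfappstart launches the target inside the namespace (common.sh@506) with --wait-for-rpc, so the app pauses after registration until framework_start_init arrives over RPC, and waitforlisten blocks on the UNIX-domain RPC socket before the test proceeds. A simplified sketch of that start-up step, using the paths shown in the trace; the rpc_get_methods polling loop is only a stand-in for the harness's waitforlisten helper, whose body is not traced here:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Wait until the app is serving /var/tmp/spdk.sock before sending any configuration RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"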
00:20:04.380 [2024-10-07 11:30:59.646277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.380 [2024-10-07 11:30:59.788522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.638 [2024-10-07 11:30:59.915139] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.638 [2024-10-07 11:30:59.915206] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.638 [2024-10-07 11:30:59.915222] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.638 [2024-10-07 11:30:59.915233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.638 [2024-10-07 11:30:59.915242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.638 [2024-10-07 11:30:59.915719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.572 11:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.572 [2024-10-07 11:31:00.846523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.572 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.572 Malloc0 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.573 [2024-10-07 11:31:00.908650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.573 [2024-10-07 11:31:00.932771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.573 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:05.831 [2024-10-07 11:31:01.118463] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:07.206 Initializing NVMe Controllers 00:20:07.206 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:07.206 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:07.206 Initialization complete. Launching workers. 00:20:07.206 ======================================================== 00:20:07.206 Latency(us) 00:20:07.206 Device Information : IOPS MiB/s Average min max 00:20:07.206 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.99 62.50 8000.13 7841.63 8337.37 00:20:07.206 ======================================================== 00:20:07.206 Total : 499.99 62.50 8000.13 7841.63 8337.37 00:20:07.206 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:07.207 rmmod nvme_tcp 00:20:07.207 rmmod nvme_fabrics 00:20:07.207 rmmod nvme_keyring 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 73983 ']' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 73983 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73983 ']' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # 
kill -0 73983 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73983 00:20:07.207 killing process with pid 73983 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73983' 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73983 00:20:07.207 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73983 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.467 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.740 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:20:07.740 00:20:07.740 real 0m4.188s 00:20:07.740 user 0m3.736s 00:20:07.740 sys 0m0.890s 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.740 ************************************ 00:20:07.740 END TEST nvmf_wait_for_buf 00:20:07.740 ************************************ 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:07.740 00:20:07.740 real 5m20.429s 00:20:07.740 user 11m14.559s 00:20:07.740 sys 1m8.681s 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.740 11:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.740 ************************************ 00:20:07.740 END TEST nvmf_target_extra 00:20:07.740 ************************************ 00:20:07.740 11:31:03 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:07.740 11:31:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.740 11:31:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.740 11:31:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.740 ************************************ 00:20:07.740 START TEST nvmf_host 00:20:07.740 ************************************ 00:20:07.740 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:07.740 * Looking for test storage... 
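The wait_for_buf test body that just completed above is short: it deliberately shrinks the iobuf small pool to 154 buffers before framework init, stands up a malloc-backed TCP subsystem on 10.0.0.3:4420, pushes 128 KiB random reads at it with spdk_nvme_perf, and then checks how often the transport had to retry small-buffer allocations (retry_count was 4750 in this run; presumably a value of 0 would mean the pool was never exhausted and the test would fail). A condensed sketch of that sequence, with all values taken from the trace; rpc_cmd is the harness wrapper around scripts/rpc.py, and the failure branch is an assumption paraphrased from the "[[ 4750 -eq 0 ]]" check:

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192    # starve the small pool on purpose
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    echo "no iobuf retries observed, small pool was never exhausted" >&2
    exit 1                                 # assumed failure path; the trace only shows the comparison
fi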
00:20:07.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:07.740 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:07.740 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:07.740 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:07.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.998 --rc genhtml_branch_coverage=1 00:20:07.998 --rc genhtml_function_coverage=1 00:20:07.998 --rc genhtml_legend=1 00:20:07.998 --rc geninfo_all_blocks=1 00:20:07.998 --rc geninfo_unexecuted_blocks=1 00:20:07.998 00:20:07.998 ' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:07.998 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:07.998 --rc genhtml_branch_coverage=1 00:20:07.998 --rc genhtml_function_coverage=1 00:20:07.998 --rc genhtml_legend=1 00:20:07.998 --rc geninfo_all_blocks=1 00:20:07.998 --rc geninfo_unexecuted_blocks=1 00:20:07.998 00:20:07.998 ' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:07.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.998 --rc genhtml_branch_coverage=1 00:20:07.998 --rc genhtml_function_coverage=1 00:20:07.998 --rc genhtml_legend=1 00:20:07.998 --rc geninfo_all_blocks=1 00:20:07.998 --rc geninfo_unexecuted_blocks=1 00:20:07.998 00:20:07.998 ' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:07.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.998 --rc genhtml_branch_coverage=1 00:20:07.998 --rc genhtml_function_coverage=1 00:20:07.998 --rc genhtml_legend=1 00:20:07.998 --rc geninfo_all_blocks=1 00:20:07.998 --rc geninfo_unexecuted_blocks=1 00:20:07.998 00:20:07.998 ' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.998 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.999 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:07.999 
11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.999 ************************************ 00:20:07.999 START TEST nvmf_identify 00:20:07.999 ************************************ 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:07.999 * Looking for test storage... 00:20:07.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.999 --rc genhtml_branch_coverage=1 00:20:07.999 --rc genhtml_function_coverage=1 00:20:07.999 --rc genhtml_legend=1 00:20:07.999 --rc geninfo_all_blocks=1 00:20:07.999 --rc geninfo_unexecuted_blocks=1 00:20:07.999 00:20:07.999 ' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.999 --rc genhtml_branch_coverage=1 00:20:07.999 --rc genhtml_function_coverage=1 00:20:07.999 --rc genhtml_legend=1 00:20:07.999 --rc geninfo_all_blocks=1 00:20:07.999 --rc geninfo_unexecuted_blocks=1 00:20:07.999 00:20:07.999 ' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.999 --rc genhtml_branch_coverage=1 00:20:07.999 --rc genhtml_function_coverage=1 00:20:07.999 --rc genhtml_legend=1 00:20:07.999 --rc geninfo_all_blocks=1 00:20:07.999 --rc geninfo_unexecuted_blocks=1 00:20:07.999 00:20:07.999 ' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.999 --rc genhtml_branch_coverage=1 00:20:07.999 --rc genhtml_function_coverage=1 00:20:07.999 --rc genhtml_legend=1 00:20:07.999 --rc geninfo_all_blocks=1 00:20:07.999 --rc geninfo_unexecuted_blocks=1 00:20:07.999 00:20:07.999 ' 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.999 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.257 
11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.257 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:08.258 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.258 11:31:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:08.258 Cannot find device "nvmf_init_br" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:08.258 Cannot find device "nvmf_init_br2" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:08.258 Cannot find device "nvmf_tgt_br" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:08.258 Cannot find device "nvmf_tgt_br2" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:08.258 Cannot find device "nvmf_init_br" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:08.258 Cannot find device "nvmf_init_br2" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:08.258 Cannot find device "nvmf_tgt_br" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:08.258 Cannot find device "nvmf_tgt_br2" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:08.258 Cannot find device "nvmf_br" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:08.258 Cannot find device "nvmf_init_if" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:08.258 Cannot find device "nvmf_init_if2" 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.258 
11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:08.258 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:08.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:08.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:08.517 00:20:08.517 --- 10.0.0.3 ping statistics --- 00:20:08.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.517 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:08.517 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:08.517 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:20:08.517 00:20:08.517 --- 10.0.0.4 ping statistics --- 00:20:08.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.517 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:08.517 00:20:08.517 --- 10.0.0.1 ping statistics --- 00:20:08.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.517 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:08.517 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:08.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:20:08.517 00:20:08.517 --- 10.0.0.2 ping statistics --- 00:20:08.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.517 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74306 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74306 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74306 ']' 00:20:08.518 
11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.518 11:31:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:08.518 [2024-10-07 11:31:03.988224] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:08.518 [2024-10-07 11:31:03.988336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.776 [2024-10-07 11:31:04.134776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.776 [2024-10-07 11:31:04.253756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.776 [2024-10-07 11:31:04.254028] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.776 [2024-10-07 11:31:04.254181] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.776 [2024-10-07 11:31:04.254243] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.776 [2024-10-07 11:31:04.254377] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
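For reference, the nvmf_veth_init sequence traced above amounts to a small two-sided topology: four veth pairs, a network namespace (nvmf_tgt_ns_spdk) holding the target-side ends, a bridge (nvmf_br) joining the host-side ends, TCP/4420 opened through iptables, and connectivity verified with the pings. A condensed sketch of the equivalent commands (names and addresses taken from the trace; assumes iproute2 and iptables, run as root; not the common.sh implementation itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator-side pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target-side pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the host-side ends
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                      # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host

The ping statistics printed above (four replies, 0% loss) confirm this topology before the target is started.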
00:20:08.776 [2024-10-07 11:31:04.255663] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.776 [2024-10-07 11:31:04.255802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.776 [2024-10-07 11:31:04.256466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.776 [2024-10-07 11:31:04.256480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.034 [2024-10-07 11:31:04.311112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 [2024-10-07 11:31:04.963106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.600 11:31:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 Malloc0 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 [2024-10-07 11:31:05.059074] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 [ 00:20:09.600 { 00:20:09.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:09.600 "subtype": "Discovery", 00:20:09.600 "listen_addresses": [ 00:20:09.600 { 00:20:09.600 "trtype": "TCP", 00:20:09.600 "adrfam": "IPv4", 00:20:09.600 "traddr": "10.0.0.3", 00:20:09.600 "trsvcid": "4420" 00:20:09.600 } 00:20:09.600 ], 00:20:09.600 "allow_any_host": true, 00:20:09.600 "hosts": [] 00:20:09.600 }, 00:20:09.600 { 00:20:09.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.600 "subtype": "NVMe", 00:20:09.600 "listen_addresses": [ 00:20:09.600 { 00:20:09.600 "trtype": "TCP", 00:20:09.600 "adrfam": "IPv4", 00:20:09.600 "traddr": "10.0.0.3", 00:20:09.600 "trsvcid": "4420" 00:20:09.600 } 00:20:09.600 ], 00:20:09.600 "allow_any_host": true, 00:20:09.600 "hosts": [], 00:20:09.600 "serial_number": "SPDK00000000000001", 00:20:09.600 "model_number": "SPDK bdev Controller", 00:20:09.600 "max_namespaces": 32, 00:20:09.600 "min_cntlid": 1, 00:20:09.600 "max_cntlid": 65519, 00:20:09.600 "namespaces": [ 00:20:09.600 { 00:20:09.600 "nsid": 1, 00:20:09.600 "bdev_name": "Malloc0", 00:20:09.600 "name": "Malloc0", 00:20:09.600 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:09.600 "eui64": "ABCDEF0123456789", 00:20:09.600 "uuid": "508e0c38-a02c-4601-afb4-8f78c29769ab" 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 } 00:20:09.600 ] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:09.600 [2024-10-07 11:31:05.119761] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 
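The rpc_cmd calls traced above provision the target before the identify run: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, an NVM subsystem with that bdev as namespace 1, and data plus discovery listeners on 10.0.0.3:4420. A condensed reproduction is sketched below, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock (the test itself uses the rpc_cmd and waitforlisten helpers):

    # Start the target inside the namespace, exactly as captured in the trace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Rough stand-in for waitforlisten: poll until the RPC socket answers
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.1; done

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

nvmf_get_subsystems then reports the two subsystems shown in the JSON output above.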
00:20:09.600 [2024-10-07 11:31:05.119815] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74341 ] 00:20:09.860 [2024-10-07 11:31:05.257693] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:09.860 [2024-10-07 11:31:05.257790] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:09.860 [2024-10-07 11:31:05.257797] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:09.860 [2024-10-07 11:31:05.257812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:09.860 [2024-10-07 11:31:05.257824] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:09.860 [2024-10-07 11:31:05.258197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:09.860 [2024-10-07 11:31:05.258272] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15ba750 0 00:20:09.860 [2024-10-07 11:31:05.262343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:09.860 [2024-10-07 11:31:05.262442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:09.860 [2024-10-07 11:31:05.262448] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:09.860 [2024-10-07 11:31:05.262452] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:09.860 [2024-10-07 11:31:05.262495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.262502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.262507] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.860 [2024-10-07 11:31:05.262524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:09.860 [2024-10-07 11:31:05.262560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.860 [2024-10-07 11:31:05.270352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.860 [2024-10-07 11:31:05.270389] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.860 [2024-10-07 11:31:05.270395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270401] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.860 [2024-10-07 11:31:05.270418] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:09.860 [2024-10-07 11:31:05.270429] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:09.860 [2024-10-07 11:31:05.270436] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:09.860 [2024-10-07 11:31:05.270458] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270464] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.860 
[2024-10-07 11:31:05.270468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.860 [2024-10-07 11:31:05.270481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.860 [2024-10-07 11:31:05.270513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.860 [2024-10-07 11:31:05.270600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.860 [2024-10-07 11:31:05.270607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.860 [2024-10-07 11:31:05.270611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.860 [2024-10-07 11:31:05.270622] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:09.860 [2024-10-07 11:31:05.270629] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:09.860 [2024-10-07 11:31:05.270638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.860 [2024-10-07 11:31:05.270655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.860 [2024-10-07 11:31:05.270675] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.860 [2024-10-07 11:31:05.270742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.860 [2024-10-07 11:31:05.270749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.860 [2024-10-07 11:31:05.270752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.860 [2024-10-07 11:31:05.270757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.860 [2024-10-07 11:31:05.270763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:09.860 [2024-10-07 11:31:05.270772] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:09.860 [2024-10-07 11:31:05.270780] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.270784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.270788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.270796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.861 [2024-10-07 11:31:05.270814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.270881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.270888] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.270892] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.270896] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.270902] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:09.861 [2024-10-07 11:31:05.270913] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.270917] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.270921] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.270929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.861 [2024-10-07 11:31:05.270946] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.271008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.271015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.271019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.271028] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:09.861 [2024-10-07 11:31:05.271034] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:09.861 [2024-10-07 11:31:05.271042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:09.861 [2024-10-07 11:31:05.271148] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:09.861 [2024-10-07 11:31:05.271162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:09.861 [2024-10-07 11:31:05.271173] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271178] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.861 [2024-10-07 11:31:05.271211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.271280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.271287] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.271291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 
[2024-10-07 11:31:05.271296] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.271301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:09.861 [2024-10-07 11:31:05.271312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271334] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.861 [2024-10-07 11:31:05.271363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.271426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.271433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.271437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.271446] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:09.861 [2024-10-07 11:31:05.271453] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.271461] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:09.861 [2024-10-07 11:31:05.271478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.271491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271496] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.861 [2024-10-07 11:31:05.271523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.271652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.861 [2024-10-07 11:31:05.271660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.861 [2024-10-07 11:31:05.271664] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ba750): datao=0, datal=4096, cccid=0 00:20:09.861 [2024-10-07 11:31:05.271674] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161e840) on tqpair(0x15ba750): expected_datao=0, payload_size=4096 00:20:09.861 [2024-10-07 11:31:05.271679] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 
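The exchange above is the discovery controller being connected, enabled (CC.EN set, CSTS.RDY polled) and identified over the admin queue. Once this pass finishes, the same tool can in principle be pointed at the NVM subsystem created earlier; a hypothetical follow-up invocation, not part of this log, would look like:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all

The -r transport-ID string uses the same key:value fields as the discovery invocation above, only with the data subsystem's NQN.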
[2024-10-07 11:31:05.271689] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271694] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271703] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.271711] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.271714] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.271729] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:09.861 [2024-10-07 11:31:05.271734] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:09.861 [2024-10-07 11:31:05.271739] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:09.861 [2024-10-07 11:31:05.271745] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:09.861 [2024-10-07 11:31:05.271750] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:09.861 [2024-10-07 11:31:05.271755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.271764] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.271778] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271783] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.861 [2024-10-07 11:31:05.271817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.861 [2024-10-07 11:31:05.271890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.861 [2024-10-07 11:31:05.271897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.861 [2024-10-07 11:31:05.271901] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271905] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.861 [2024-10-07 11:31:05.271914] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271919] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.861 [2024-10-07 11:31:05.271937] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.861 [2024-10-07 11:31:05.271958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.861 [2024-10-07 11:31:05.271979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.271987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.271993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.861 [2024-10-07 11:31:05.271999] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.272013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:09.861 [2024-10-07 11:31:05.272022] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.861 [2024-10-07 11:31:05.272026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ba750) 00:20:09.861 [2024-10-07 11:31:05.272033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.862 [2024-10-07 11:31:05.272054] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e840, cid 0, qid 0 00:20:09.862 [2024-10-07 11:31:05.272062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161e9c0, cid 1, qid 0 00:20:09.862 [2024-10-07 11:31:05.272067] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161eb40, cid 2, qid 0 00:20:09.862 [2024-10-07 11:31:05.272072] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.862 [2024-10-07 11:31:05.272077] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ee40, cid 4, qid 0 00:20:09.862 [2024-10-07 11:31:05.272185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.862 [2024-10-07 11:31:05.272201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.862 [2024-10-07 11:31:05.272206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272211] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ee40) on tqpair=0x15ba750 00:20:09.862 [2024-10-07 11:31:05.272217] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:09.862 [2024-10-07 11:31:05.272223] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:09.862 [2024-10-07 11:31:05.272235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272240] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ba750) 00:20:09.862 [2024-10-07 11:31:05.272248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.862 [2024-10-07 11:31:05.272268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ee40, cid 4, qid 0 00:20:09.862 [2024-10-07 11:31:05.272353] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.862 [2024-10-07 11:31:05.272362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.862 [2024-10-07 11:31:05.272366] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ba750): datao=0, datal=4096, cccid=4 00:20:09.862 [2024-10-07 11:31:05.272380] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161ee40) on tqpair(0x15ba750): expected_datao=0, payload_size=4096 00:20:09.862 [2024-10-07 11:31:05.272385] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272393] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272398] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272410] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.862 [2024-10-07 11:31:05.272417] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.862 [2024-10-07 11:31:05.272421] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ee40) on tqpair=0x15ba750 00:20:09.862 [2024-10-07 11:31:05.272440] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:09.862 [2024-10-07 11:31:05.272474] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ba750) 00:20:09.862 [2024-10-07 11:31:05.272488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.862 [2024-10-07 11:31:05.272497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272501] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272505] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ba750) 00:20:09.862 [2024-10-07 11:31:05.272511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.862 [2024-10-07 11:31:05.272538] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x161ee40, cid 4, qid 0 00:20:09.862 [2024-10-07 11:31:05.272546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161efc0, cid 5, qid 0 00:20:09.862 [2024-10-07 11:31:05.272664] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.862 [2024-10-07 11:31:05.272679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.862 [2024-10-07 11:31:05.272684] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ba750): datao=0, datal=1024, cccid=4 00:20:09.862 [2024-10-07 11:31:05.272693] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161ee40) on tqpair(0x15ba750): expected_datao=0, payload_size=1024 00:20:09.862 [2024-10-07 11:31:05.272705] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272712] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272717] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.862 [2024-10-07 11:31:05.272729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.862 [2024-10-07 11:31:05.272739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161efc0) on tqpair=0x15ba750 00:20:09.862 [2024-10-07 11:31:05.272763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.862 [2024-10-07 11:31:05.272771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.862 [2024-10-07 11:31:05.272775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ee40) on tqpair=0x15ba750 00:20:09.862 [2024-10-07 11:31:05.272793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ba750) 00:20:09.862 [2024-10-07 11:31:05.272805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.862 [2024-10-07 11:31:05.272830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ee40, cid 4, qid 0 00:20:09.862 [2024-10-07 11:31:05.272918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.862 [2024-10-07 11:31:05.272925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.862 [2024-10-07 11:31:05.272929] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ba750): datao=0, datal=3072, cccid=4 00:20:09.862 [2024-10-07 11:31:05.272938] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161ee40) on tqpair(0x15ba750): expected_datao=0, payload_size=3072 00:20:09.862 [2024-10-07 11:31:05.272942] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272950] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272954] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272963] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.862 [2024-10-07 11:31:05.272969] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.862 [2024-10-07 11:31:05.272973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ee40) on tqpair=0x15ba750 00:20:09.862 [2024-10-07 11:31:05.272988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.272992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ba750) 00:20:09.862 [2024-10-07 11:31:05.273000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.862 [2024-10-07 11:31:05.273024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ee40, cid 4, qid 0 00:20:09.862 [2024-10-07 11:31:05.273113] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:09.862 [2024-10-07 11:31:05.273124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:09.862 [2024-10-07 11:31:05.273129] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.273133] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ba750): datao=0, datal=8, cccid=4 00:20:09.862 [2024-10-07 11:31:05.273138] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161ee40) on tqpair(0x15ba750): expected_datao=0, payload_size=8 00:20:09.862 [2024-10-07 11:31:05.273142] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.273150] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:09.862 [2024-10-07 11:31:05.273154] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:09.862 ===================================================== 00:20:09.862 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:09.862 ===================================================== 00:20:09.862 Controller Capabilities/Features 00:20:09.862 ================================ 00:20:09.862 Vendor ID: 0000 00:20:09.862 Subsystem Vendor ID: 0000 00:20:09.862 Serial Number: .................... 00:20:09.862 Model Number: ........................................ 
00:20:09.862 Firmware Version: 25.01 00:20:09.862 Recommended Arb Burst: 0 00:20:09.862 IEEE OUI Identifier: 00 00 00 00:20:09.862 Multi-path I/O 00:20:09.862 May have multiple subsystem ports: No 00:20:09.862 May have multiple controllers: No 00:20:09.862 Associated with SR-IOV VF: No 00:20:09.862 Max Data Transfer Size: 131072 00:20:09.862 Max Number of Namespaces: 0 00:20:09.862 Max Number of I/O Queues: 1024 00:20:09.862 NVMe Specification Version (VS): 1.3 00:20:09.862 NVMe Specification Version (Identify): 1.3 00:20:09.862 Maximum Queue Entries: 128 00:20:09.862 Contiguous Queues Required: Yes 00:20:09.862 Arbitration Mechanisms Supported 00:20:09.862 Weighted Round Robin: Not Supported 00:20:09.862 Vendor Specific: Not Supported 00:20:09.862 Reset Timeout: 15000 ms 00:20:09.862 Doorbell Stride: 4 bytes 00:20:09.862 NVM Subsystem Reset: Not Supported 00:20:09.862 Command Sets Supported 00:20:09.862 NVM Command Set: Supported 00:20:09.862 Boot Partition: Not Supported 00:20:09.863 Memory Page Size Minimum: 4096 bytes 00:20:09.863 Memory Page Size Maximum: 4096 bytes 00:20:09.863 Persistent Memory Region: Not Supported 00:20:09.863 Optional Asynchronous Events Supported 00:20:09.863 Namespace Attribute Notices: Not Supported 00:20:09.863 Firmware Activation Notices: Not Supported 00:20:09.863 ANA Change Notices: Not Supported 00:20:09.863 PLE Aggregate Log Change Notices: Not Supported 00:20:09.863 LBA Status Info Alert Notices: Not Supported 00:20:09.863 EGE Aggregate Log Change Notices: Not Supported 00:20:09.863 Normal NVM Subsystem Shutdown event: Not Supported 00:20:09.863 Zone Descriptor Change Notices: Not Supported 00:20:09.863 Discovery Log Change Notices: Supported 00:20:09.863 Controller Attributes 00:20:09.863 128-bit Host Identifier: Not Supported 00:20:09.863 Non-Operational Permissive Mode: Not Supported 00:20:09.863 NVM Sets: Not Supported 00:20:09.863 Read Recovery Levels: Not Supported 00:20:09.863 Endurance Groups: Not Supported 00:20:09.863 Predictable Latency Mode: Not Supported 00:20:09.863 Traffic Based Keep ALive: Not Supported 00:20:09.863 Namespace Granularity: Not Supported 00:20:09.863 SQ Associations: Not Supported 00:20:09.863 UUID List: Not Supported 00:20:09.863 Multi-Domain Subsystem: Not Supported 00:20:09.863 Fixed Capacity Management: Not Supported 00:20:09.863 Variable Capacity Management: Not Supported 00:20:09.863 Delete Endurance Group: Not Supported 00:20:09.863 Delete NVM Set: Not Supported 00:20:09.863 Extended LBA Formats Supported: Not Supported 00:20:09.863 Flexible Data Placement Supported: Not Supported 00:20:09.863 00:20:09.863 Controller Memory Buffer Support 00:20:09.863 ================================ 00:20:09.863 Supported: No 00:20:09.863 00:20:09.863 Persistent Memory Region Support 00:20:09.863 ================================ 00:20:09.863 Supported: No 00:20:09.863 00:20:09.863 Admin Command Set Attributes 00:20:09.863 ============================ 00:20:09.863 Security Send/Receive: Not Supported 00:20:09.863 Format NVM: Not Supported 00:20:09.863 Firmware Activate/Download: Not Supported 00:20:09.863 Namespace Management: Not Supported 00:20:09.863 Device Self-Test: Not Supported 00:20:09.863 Directives: Not Supported 00:20:09.863 NVMe-MI: Not Supported 00:20:09.863 Virtualization Management: Not Supported 00:20:09.863 Doorbell Buffer Config: Not Supported 00:20:09.863 Get LBA Status Capability: Not Supported 00:20:09.863 Command & Feature Lockdown Capability: Not Supported 00:20:09.863 Abort Command Limit: 1 00:20:09.863 Async 
Event Request Limit: 4 00:20:09.863 Number of Firmware Slots: N/A 00:20:09.863 Firmware Slot 1 Read-Only: N/A 00:20:09.863 Firm[2024-10-07 11:31:05.273170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.863 [2024-10-07 11:31:05.273178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.863 [2024-10-07 11:31:05.273182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.863 [2024-10-07 11:31:05.273186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ee40) on tqpair=0x15ba750 00:20:09.863 ware Activation Without Reset: N/A 00:20:09.863 Multiple Update Detection Support: N/A 00:20:09.863 Firmware Update Granularity: No Information Provided 00:20:09.863 Per-Namespace SMART Log: No 00:20:09.863 Asymmetric Namespace Access Log Page: Not Supported 00:20:09.863 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:09.863 Command Effects Log Page: Not Supported 00:20:09.863 Get Log Page Extended Data: Supported 00:20:09.863 Telemetry Log Pages: Not Supported 00:20:09.863 Persistent Event Log Pages: Not Supported 00:20:09.863 Supported Log Pages Log Page: May Support 00:20:09.863 Commands Supported & Effects Log Page: Not Supported 00:20:09.863 Feature Identifiers & Effects Log Page:May Support 00:20:09.863 NVMe-MI Commands & Effects Log Page: May Support 00:20:09.863 Data Area 4 for Telemetry Log: Not Supported 00:20:09.863 Error Log Page Entries Supported: 128 00:20:09.863 Keep Alive: Not Supported 00:20:09.863 00:20:09.863 NVM Command Set Attributes 00:20:09.863 ========================== 00:20:09.863 Submission Queue Entry Size 00:20:09.863 Max: 1 00:20:09.863 Min: 1 00:20:09.863 Completion Queue Entry Size 00:20:09.863 Max: 1 00:20:09.863 Min: 1 00:20:09.863 Number of Namespaces: 0 00:20:09.863 Compare Command: Not Supported 00:20:09.863 Write Uncorrectable Command: Not Supported 00:20:09.863 Dataset Management Command: Not Supported 00:20:09.863 Write Zeroes Command: Not Supported 00:20:09.863 Set Features Save Field: Not Supported 00:20:09.863 Reservations: Not Supported 00:20:09.863 Timestamp: Not Supported 00:20:09.863 Copy: Not Supported 00:20:09.863 Volatile Write Cache: Not Present 00:20:09.863 Atomic Write Unit (Normal): 1 00:20:09.863 Atomic Write Unit (PFail): 1 00:20:09.863 Atomic Compare & Write Unit: 1 00:20:09.863 Fused Compare & Write: Supported 00:20:09.863 Scatter-Gather List 00:20:09.863 SGL Command Set: Supported 00:20:09.863 SGL Keyed: Supported 00:20:09.863 SGL Bit Bucket Descriptor: Not Supported 00:20:09.863 SGL Metadata Pointer: Not Supported 00:20:09.863 Oversized SGL: Not Supported 00:20:09.863 SGL Metadata Address: Not Supported 00:20:09.863 SGL Offset: Supported 00:20:09.863 Transport SGL Data Block: Not Supported 00:20:09.863 Replay Protected Memory Block: Not Supported 00:20:09.863 00:20:09.863 Firmware Slot Information 00:20:09.863 ========================= 00:20:09.863 Active slot: 0 00:20:09.863 00:20:09.863 00:20:09.863 Error Log 00:20:09.863 ========= 00:20:09.863 00:20:09.863 Active Namespaces 00:20:09.863 ================= 00:20:09.863 Discovery Log Page 00:20:09.863 ================== 00:20:09.863 Generation Counter: 2 00:20:09.863 Number of Records: 2 00:20:09.863 Record Format: 0 00:20:09.863 00:20:09.863 Discovery Log Entry 0 00:20:09.863 ---------------------- 00:20:09.863 Transport Type: 3 (TCP) 00:20:09.863 Address Family: 1 (IPv4) 00:20:09.863 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:09.863 Entry Flags: 00:20:09.863 Duplicate Returned 
Information: 1 00:20:09.863 Explicit Persistent Connection Support for Discovery: 1 00:20:09.863 Transport Requirements: 00:20:09.863 Secure Channel: Not Required 00:20:09.863 Port ID: 0 (0x0000) 00:20:09.863 Controller ID: 65535 (0xffff) 00:20:09.863 Admin Max SQ Size: 128 00:20:09.863 Transport Service Identifier: 4420 00:20:09.863 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:09.863 Transport Address: 10.0.0.3 00:20:09.863 Discovery Log Entry 1 00:20:09.863 ---------------------- 00:20:09.863 Transport Type: 3 (TCP) 00:20:09.863 Address Family: 1 (IPv4) 00:20:09.863 Subsystem Type: 2 (NVM Subsystem) 00:20:09.863 Entry Flags: 00:20:09.863 Duplicate Returned Information: 0 00:20:09.863 Explicit Persistent Connection Support for Discovery: 0 00:20:09.863 Transport Requirements: 00:20:09.863 Secure Channel: Not Required 00:20:09.863 Port ID: 0 (0x0000) 00:20:09.863 Controller ID: 65535 (0xffff) 00:20:09.863 Admin Max SQ Size: 128 00:20:09.863 Transport Service Identifier: 4420 00:20:09.863 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:09.863 Transport Address: 10.0.0.3 [2024-10-07 11:31:05.273284] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:09.863 [2024-10-07 11:31:05.273298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e840) on tqpair=0x15ba750 00:20:09.863 [2024-10-07 11:31:05.273305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.863 [2024-10-07 11:31:05.273311] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161e9c0) on tqpair=0x15ba750 00:20:09.863 [2024-10-07 11:31:05.273333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.863 [2024-10-07 11:31:05.273340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161eb40) on tqpair=0x15ba750 00:20:09.863 [2024-10-07 11:31:05.273345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.863 [2024-10-07 11:31:05.273351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.863 [2024-10-07 11:31:05.273356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.863 [2024-10-07 11:31:05.273365] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.863 [2024-10-07 11:31:05.273370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.863 [2024-10-07 11:31:05.273374] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.863 [2024-10-07 11:31:05.273382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.273407] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.273474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.273481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.273485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273489] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.273498] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.273514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.273536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.273632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.273639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.273643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.273653] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:09.864 [2024-10-07 11:31:05.273657] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:09.864 [2024-10-07 11:31:05.273668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.273684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.273701] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.273762] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.273769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.273773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.273788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.273804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.273822] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.273893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.273905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 
11:31:05.273909] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273914] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.273925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.273934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.273941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.273959] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.274033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.274044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.274048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274053] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.274063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.274080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.274098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.274163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.274169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.274173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.274188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.274204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.274222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.274278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.274295] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.274300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.274304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on 
tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.274315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.278342] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.278350] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ba750) 00:20:09.864 [2024-10-07 11:31:05.278360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.864 [2024-10-07 11:31:05.278390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161ecc0, cid 3, qid 0 00:20:09.864 [2024-10-07 11:31:05.278450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:09.864 [2024-10-07 11:31:05.278458] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:09.864 [2024-10-07 11:31:05.278462] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:09.864 [2024-10-07 11:31:05.278466] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161ecc0) on tqpair=0x15ba750 00:20:09.864 [2024-10-07 11:31:05.278476] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:20:09.864 00:20:09.864 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:09.864 [2024-10-07 11:31:05.331850] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:09.864 [2024-10-07 11:31:05.331943] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74343 ] 00:20:10.128 [2024-10-07 11:31:05.481509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:10.128 [2024-10-07 11:31:05.481590] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:10.128 [2024-10-07 11:31:05.481597] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:10.128 [2024-10-07 11:31:05.481610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:10.128 [2024-10-07 11:31:05.481622] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:10.128 [2024-10-07 11:31:05.481968] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:10.128 [2024-10-07 11:31:05.482040] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9d9750 0 00:20:10.128 [2024-10-07 11:31:05.494344] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:10.128 [2024-10-07 11:31:05.494371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:10.128 [2024-10-07 11:31:05.494378] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:10.128 [2024-10-07 11:31:05.494382] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:10.128 [2024-10-07 11:31:05.494425] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.128 [2024-10-07 
11:31:05.494433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.494437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.128 [2024-10-07 11:31:05.494453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:10.128 [2024-10-07 11:31:05.494485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.128 [2024-10-07 11:31:05.502336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.128 [2024-10-07 11:31:05.502360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.128 [2024-10-07 11:31:05.502365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.128 [2024-10-07 11:31:05.502382] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:10.128 [2024-10-07 11:31:05.502391] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:10.128 [2024-10-07 11:31:05.502398] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:10.128 [2024-10-07 11:31:05.502415] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.128 [2024-10-07 11:31:05.502436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.128 [2024-10-07 11:31:05.502464] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.128 [2024-10-07 11:31:05.502523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.128 [2024-10-07 11:31:05.502530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.128 [2024-10-07 11:31:05.502534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502538] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.128 [2024-10-07 11:31:05.502545] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:10.128 [2024-10-07 11:31:05.502553] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:10.128 [2024-10-07 11:31:05.502561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502565] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.128 [2024-10-07 11:31:05.502584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.128 [2024-10-07 11:31:05.502602] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.128 [2024-10-07 
11:31:05.502648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.128 [2024-10-07 11:31:05.502655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.128 [2024-10-07 11:31:05.502659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.128 [2024-10-07 11:31:05.502669] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:10.128 [2024-10-07 11:31:05.502678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:10.128 [2024-10-07 11:31:05.502686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502694] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.128 [2024-10-07 11:31:05.502702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.128 [2024-10-07 11:31:05.502719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.128 [2024-10-07 11:31:05.502773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.128 [2024-10-07 11:31:05.502780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.128 [2024-10-07 11:31:05.502784] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.128 [2024-10-07 11:31:05.502794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:10.128 [2024-10-07 11:31:05.502805] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502814] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.128 [2024-10-07 11:31:05.502821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.128 [2024-10-07 11:31:05.502838] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.128 [2024-10-07 11:31:05.502886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.128 [2024-10-07 11:31:05.502893] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.128 [2024-10-07 11:31:05.502896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.128 [2024-10-07 11:31:05.502901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.128 [2024-10-07 11:31:05.502906] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:10.128 [2024-10-07 11:31:05.502911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:10.129 [2024-10-07 
11:31:05.502919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:10.129 [2024-10-07 11:31:05.503026] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:10.129 [2024-10-07 11:31:05.503040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:10.129 [2024-10-07 11:31:05.503050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503059] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.129 [2024-10-07 11:31:05.503087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.129 [2024-10-07 11:31:05.503144] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.503151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.503155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503159] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.129 [2024-10-07 11:31:05.503164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:10.129 [2024-10-07 11:31:05.503175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.129 [2024-10-07 11:31:05.503208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.129 [2024-10-07 11:31:05.503253] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.503260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.503263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.129 [2024-10-07 11:31:05.503273] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:10.129 [2024-10-07 11:31:05.503278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503286] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:10.129 [2024-10-07 11:31:05.503303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503314] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.129 [2024-10-07 11:31:05.503363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.129 [2024-10-07 11:31:05.503466] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.129 [2024-10-07 11:31:05.503474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.129 [2024-10-07 11:31:05.503478] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503482] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=4096, cccid=0 00:20:10.129 [2024-10-07 11:31:05.503487] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d840) on tqpair(0x9d9750): expected_datao=0, payload_size=4096 00:20:10.129 [2024-10-07 11:31:05.503492] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503501] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503505] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503517] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.503524] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.503527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.129 [2024-10-07 11:31:05.503541] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:10.129 [2024-10-07 11:31:05.503547] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:10.129 [2024-10-07 11:31:05.503551] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:10.129 [2024-10-07 11:31:05.503556] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:10.129 [2024-10-07 11:31:05.503561] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:10.129 [2024-10-07 11:31:05.503567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.129 
[2024-10-07 11:31:05.503606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.129 [2024-10-07 11:31:05.503626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.129 [2024-10-07 11:31:05.503683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.503691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.503694] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.129 [2024-10-07 11:31:05.503708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503712] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.129 [2024-10-07 11:31:05.503730] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.129 [2024-10-07 11:31:05.503751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.129 [2024-10-07 11:31:05.503771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503776] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503779] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.129 [2024-10-07 11:31:05.503791] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.503813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.503824] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.129 [2024-10-07 11:31:05.503844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d840, cid 0, qid 0 00:20:10.129 [2024-10-07 11:31:05.503852] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d9c0, cid 1, qid 0 00:20:10.129 [2024-10-07 11:31:05.503857] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3db40, cid 2, qid 0 00:20:10.129 [2024-10-07 11:31:05.503862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.129 [2024-10-07 11:31:05.503867] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.129 [2024-10-07 11:31:05.503961] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.503977] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.503982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.503986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.129 [2024-10-07 11:31:05.503992] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:10.129 [2024-10-07 11:31:05.503998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.504011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.504019] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:10.129 [2024-10-07 11:31:05.504027] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.504031] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.504035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.129 [2024-10-07 11:31:05.504043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.129 [2024-10-07 11:31:05.504062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.129 [2024-10-07 11:31:05.504115] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.129 [2024-10-07 11:31:05.504122] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.129 [2024-10-07 11:31:05.504125] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.129 [2024-10-07 11:31:05.504130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.504196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504217] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.504229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.504248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.130 [2024-10-07 11:31:05.504309] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.130 [2024-10-07 11:31:05.504328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.130 [2024-10-07 11:31:05.504334] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504338] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=4096, cccid=4 00:20:10.130 [2024-10-07 11:31:05.504343] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3de40) on tqpair(0x9d9750): expected_datao=0, payload_size=4096 00:20:10.130 [2024-10-07 11:31:05.504348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504356] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504360] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.504376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.504379] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.504403] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:10.130 [2024-10-07 11:31:05.504415] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.504446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.504468] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.130 [2024-10-07 11:31:05.504551] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.130 [2024-10-07 11:31:05.504559] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.130 [2024-10-07 11:31:05.504563] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=4096, cccid=4 00:20:10.130 [2024-10-07 11:31:05.504573] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3de40) on tqpair(0x9d9750): expected_datao=0, payload_size=4096 00:20:10.130 [2024-10-07 11:31:05.504577] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504584] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504589] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504597] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.504603] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.504607] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504611] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.504623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.504653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.504672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.130 [2024-10-07 11:31:05.504733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.130 [2024-10-07 11:31:05.504740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.130 [2024-10-07 11:31:05.504744] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=4096, cccid=4 00:20:10.130 [2024-10-07 11:31:05.504753] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3de40) on tqpair(0x9d9750): expected_datao=0, payload_size=4096 00:20:10.130 [2024-10-07 11:31:05.504757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504764] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504768] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.504783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.504787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504791] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.504804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504814] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504823] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504847] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:10.130 [2024-10-07 11:31:05.504852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:10.130 [2024-10-07 11:31:05.504857] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:10.130 [2024-10-07 11:31:05.504875] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.504887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.504895] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.504903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.504909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.130 [2024-10-07 11:31:05.504937] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.130 [2024-10-07 11:31:05.504944] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dfc0, cid 5, qid 0 00:20:10.130 [2024-10-07 11:31:05.505011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.505018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.505022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505026] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.505033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.505039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.505042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dfc0) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.505057] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.505069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.505086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dfc0, cid 5, qid 0 00:20:10.130 [2024-10-07 11:31:05.505136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.505143] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.505147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dfc0) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.505161] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505166] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.505173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.130 [2024-10-07 11:31:05.505189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dfc0, cid 5, qid 0 00:20:10.130 [2024-10-07 11:31:05.505238] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.130 [2024-10-07 11:31:05.505250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.130 [2024-10-07 11:31:05.505255] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505259] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dfc0) on tqpair=0x9d9750 00:20:10.130 [2024-10-07 11:31:05.505270] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.130 [2024-10-07 11:31:05.505275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9750) 00:20:10.130 [2024-10-07 11:31:05.505282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.131 [2024-10-07 11:31:05.505300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dfc0, cid 5, qid 0 00:20:10.131 [2024-10-07 11:31:05.505376] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.131 [2024-10-07 11:31:05.505384] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.131 [2024-10-07 11:31:05.505388] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dfc0) on tqpair=0x9d9750 00:20:10.131 [2024-10-07 11:31:05.505412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9750) 00:20:10.131 [2024-10-07 11:31:05.505426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.131 [2024-10-07 11:31:05.505434] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9750) 00:20:10.131 [2024-10-07 11:31:05.505445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.131 [2024-10-07 11:31:05.505452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505457] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9d9750) 00:20:10.131 [2024-10-07 11:31:05.505463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.131 [2024-10-07 11:31:05.505472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d9750) 00:20:10.131 [2024-10-07 11:31:05.505483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.131 [2024-10-07 11:31:05.505505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dfc0, cid 5, qid 0 00:20:10.131 [2024-10-07 11:31:05.505512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3de40, cid 4, qid 0 00:20:10.131 [2024-10-07 11:31:05.505518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3e140, cid 6, qid 0 00:20:10.131 [2024-10-07 11:31:05.505526] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3e2c0, cid 7, qid 0 00:20:10.131 [2024-10-07 11:31:05.505674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.131 [2024-10-07 11:31:05.505681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.131 [2024-10-07 11:31:05.505685] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505689] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=8192, cccid=5 00:20:10.131 [2024-10-07 11:31:05.505694] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3dfc0) on tqpair(0x9d9750): expected_datao=0, payload_size=8192 00:20:10.131 [2024-10-07 11:31:05.505698] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505715] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505720] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.131 [2024-10-07 11:31:05.505732] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.131 [2024-10-07 11:31:05.505736] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505740] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=512, cccid=4 00:20:10.131 [2024-10-07 11:31:05.505744] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3de40) on tqpair(0x9d9750): expected_datao=0, payload_size=512 00:20:10.131 [2024-10-07 11:31:05.505749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505755] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505759] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505765] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.131 [2024-10-07 11:31:05.505771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.131 [2024-10-07 11:31:05.505774] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505778] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=512, cccid=6 00:20:10.131 [2024-10-07 11:31:05.505783] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3e140) on tqpair(0x9d9750): expected_datao=0, payload_size=512 00:20:10.131 [2024-10-07 11:31:05.505787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505794] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505797] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.131 [2024-10-07 11:31:05.505809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.131 [2024-10-07 11:31:05.505813] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9750): datao=0, datal=4096, cccid=7 00:20:10.131 [2024-10-07 11:31:05.505823] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3e2c0) on tqpair(0x9d9750): expected_datao=0, payload_size=4096 00:20:10.131 [2024-10-07 11:31:05.505827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505834] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505838] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.131 [2024-10-07 11:31:05.505852] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.131 [2024-10-07 11:31:05.505856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dfc0) on tqpair=0x9d9750 00:20:10.131 [2024-10-07 11:31:05.505876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.131 [2024-10-07 11:31:05.505883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.131 [2024-10-07 11:31:05.505887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3de40) on tqpair=0x9d9750 00:20:10.131 [2024-10-07 11:31:05.505904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.131 [2024-10-07 11:31:05.505910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.131 [2024-10-07 11:31:05.505914] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.131 [2024-10-07 11:31:05.505918] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3e140) on tqpair=0x9d9750 00:20:10.131 
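The *DEBUG* trace above is the SPDK host driver bringing up the admin queue for nqn.2016-06.io.spdk:cnode1 over TCP: fabrics CONNECT, property reads of VS and CAP, enabling the controller through CC.EN, waiting for CSTS.RDY = 1, then IDENTIFY, AER configuration, keep-alive setup and queue-count negotiation, before the identify data below is printed. As a rough, hedged illustration only (this is not the source of the spdk_nvme_identify tool invoked above, and the program name is invented), the same sequence can be driven from the public SPDK host API; spdk_nvme_connect() performs all of these steps internally:

/*
 * Hedged sketch, not the spdk_nvme_identify source: connect to the same TCP
 * subsystem the trace above is initializing and print two identify-controller
 * fields.  Uses only public SPDK host API calls (spdk_env_init,
 * spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data,
 * spdk_nvme_detach); the name "identify_sketch" is made up for this example.
 */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment layer, as the "DPDK EAL parameters"
	 * line above shows the identify tool doing at startup. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same target string the test passes via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Synchronous connect: internally performs the fabrics CONNECT,
	 * VS/CAP/CC/CSTS property accesses and IDENTIFY steps logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Such a sketch would be built against the SPDK tree compiled earlier in this job (linking the NVMe host and env libraries); the Model Number and Firmware Version it prints correspond to the fields in the controller data dumped next.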
===================================================== 00:20:10.131 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.131 ===================================================== 00:20:10.131 Controller Capabilities/Features 00:20:10.131 ================================ 00:20:10.131 Vendor ID: 8086 00:20:10.131 Subsystem Vendor ID: 8086 00:20:10.131 Serial Number: SPDK00000000000001 00:20:10.131 Model Number: SPDK bdev Controller 00:20:10.131 Firmware Version: 25.01 00:20:10.131 Recommended Arb Burst: 6 00:20:10.131 IEEE OUI Identifier: e4 d2 5c 00:20:10.131 Multi-path I/O 00:20:10.131 May have multiple subsystem ports: Yes 00:20:10.131 May have multiple controllers: Yes 00:20:10.131 Associated with SR-IOV VF: No 00:20:10.131 Max Data Transfer Size: 131072 00:20:10.131 Max Number of Namespaces: 32 00:20:10.131 Max Number of I/O Queues: 127 00:20:10.131 NVMe Specification Version (VS): 1.3 00:20:10.131 NVMe Specification Version (Identify): 1.3 00:20:10.131 Maximum Queue Entries: 128 00:20:10.131 Contiguous Queues Required: Yes 00:20:10.131 Arbitration Mechanisms Supported 00:20:10.131 Weighted Round Robin: Not Supported 00:20:10.131 Vendor Specific: Not Supported 00:20:10.131 Reset Timeout: 15000 ms 00:20:10.131 Doorbell Stride: 4 bytes 00:20:10.131 NVM Subsystem Reset: Not Supported 00:20:10.131 Command Sets Supported 00:20:10.131 NVM Command Set: Supported 00:20:10.131 Boot Partition: Not Supported 00:20:10.131 Memory Page Size Minimum: 4096 bytes 00:20:10.131 Memory Page Size Maximum: 4096 bytes 00:20:10.131 Persistent Memory Region: Not Supported 00:20:10.131 Optional Asynchronous Events Supported 00:20:10.131 Namespace Attribute Notices: Supported 00:20:10.131 Firmware Activation Notices: Not Supported 00:20:10.131 ANA Change Notices: Not Supported 00:20:10.131 PLE Aggregate Log Change Notices: Not Supported 00:20:10.131 LBA Status Info Alert Notices: Not Supported 00:20:10.131 EGE Aggregate Log Change Notices: Not Supported 00:20:10.131 Normal NVM Subsystem Shutdown event: Not Supported 00:20:10.131 Zone Descriptor Change Notices: Not Supported 00:20:10.131 Discovery Log Change Notices: Not Supported 00:20:10.131 Controller Attributes 00:20:10.131 128-bit Host Identifier: Supported 00:20:10.131 Non-Operational Permissive Mode: Not Supported 00:20:10.131 NVM Sets: Not Supported 00:20:10.131 Read Recovery Levels: Not Supported 00:20:10.131 Endurance Groups: Not Supported 00:20:10.131 Predictable Latency Mode: Not Supported 00:20:10.131 Traffic Based Keep ALive: Not Supported 00:20:10.131 Namespace Granularity: Not Supported 00:20:10.131 SQ Associations: Not Supported 00:20:10.131 UUID List: Not Supported 00:20:10.131 Multi-Domain Subsystem: Not Supported 00:20:10.131 Fixed Capacity Management: Not Supported 00:20:10.131 Variable Capacity Management: Not Supported 00:20:10.131 Delete Endurance Group: Not Supported 00:20:10.131 Delete NVM Set: Not Supported 00:20:10.131 Extended LBA Formats Supported: Not Supported 00:20:10.131 Flexible Data Placement Supported: Not Supported 00:20:10.131 00:20:10.131 Controller Memory Buffer Support 00:20:10.131 ================================ 00:20:10.131 Supported: No 00:20:10.131 00:20:10.131 Persistent Memory Region Support 00:20:10.131 ================================ 00:20:10.131 Supported: No 00:20:10.131 00:20:10.131 Admin Command Set Attributes 00:20:10.131 ============================ 00:20:10.131 Security Send/Receive: Not Supported 00:20:10.131 Format NVM: Not Supported 00:20:10.131 Firmware Activate/Download: 
Not Supported 00:20:10.131 Namespace Management: Not Supported 00:20:10.131 Device Self-Test: Not Supported 00:20:10.131 Directives: Not Supported 00:20:10.131 NVMe-MI: Not Supported 00:20:10.131 Virtualization Management: Not Supported 00:20:10.132 Doorbell Buffer Config: Not Supported 00:20:10.132 Get LBA Status Capability: Not Supported 00:20:10.132 Command & Feature Lockdown Capability: Not Supported 00:20:10.132 Abort Command Limit: 4 00:20:10.132 Async Event Request Limit: 4 00:20:10.132 Number of Firmware Slots: N/A 00:20:10.132 Firmware Slot 1 Read-Only: N/A 00:20:10.132 Firmware Activation Without Reset: [2024-10-07 11:31:05.505925] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.132 [2024-10-07 11:31:05.505931] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.132 [2024-10-07 11:31:05.505935] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.505939] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3e2c0) on tqpair=0x9d9750 00:20:10.132 N/A 00:20:10.132 Multiple Update Detection Support: N/A 00:20:10.132 Firmware Update Granularity: No Information Provided 00:20:10.132 Per-Namespace SMART Log: No 00:20:10.132 Asymmetric Namespace Access Log Page: Not Supported 00:20:10.132 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:10.132 Command Effects Log Page: Supported 00:20:10.132 Get Log Page Extended Data: Supported 00:20:10.132 Telemetry Log Pages: Not Supported 00:20:10.132 Persistent Event Log Pages: Not Supported 00:20:10.132 Supported Log Pages Log Page: May Support 00:20:10.132 Commands Supported & Effects Log Page: Not Supported 00:20:10.132 Feature Identifiers & Effects Log Page:May Support 00:20:10.132 NVMe-MI Commands & Effects Log Page: May Support 00:20:10.132 Data Area 4 for Telemetry Log: Not Supported 00:20:10.132 Error Log Page Entries Supported: 128 00:20:10.132 Keep Alive: Supported 00:20:10.132 Keep Alive Granularity: 10000 ms 00:20:10.132 00:20:10.132 NVM Command Set Attributes 00:20:10.132 ========================== 00:20:10.132 Submission Queue Entry Size 00:20:10.132 Max: 64 00:20:10.132 Min: 64 00:20:10.132 Completion Queue Entry Size 00:20:10.132 Max: 16 00:20:10.132 Min: 16 00:20:10.132 Number of Namespaces: 32 00:20:10.132 Compare Command: Supported 00:20:10.132 Write Uncorrectable Command: Not Supported 00:20:10.132 Dataset Management Command: Supported 00:20:10.132 Write Zeroes Command: Supported 00:20:10.132 Set Features Save Field: Not Supported 00:20:10.132 Reservations: Supported 00:20:10.132 Timestamp: Not Supported 00:20:10.132 Copy: Supported 00:20:10.132 Volatile Write Cache: Present 00:20:10.132 Atomic Write Unit (Normal): 1 00:20:10.132 Atomic Write Unit (PFail): 1 00:20:10.132 Atomic Compare & Write Unit: 1 00:20:10.132 Fused Compare & Write: Supported 00:20:10.132 Scatter-Gather List 00:20:10.132 SGL Command Set: Supported 00:20:10.132 SGL Keyed: Supported 00:20:10.132 SGL Bit Bucket Descriptor: Not Supported 00:20:10.132 SGL Metadata Pointer: Not Supported 00:20:10.132 Oversized SGL: Not Supported 00:20:10.132 SGL Metadata Address: Not Supported 00:20:10.132 SGL Offset: Supported 00:20:10.132 Transport SGL Data Block: Not Supported 00:20:10.132 Replay Protected Memory Block: Not Supported 00:20:10.132 00:20:10.132 Firmware Slot Information 00:20:10.132 ========================= 00:20:10.132 Active slot: 1 00:20:10.132 Slot 1 Firmware Revision: 25.01 00:20:10.132 00:20:10.132 00:20:10.132 Commands Supported and Effects 
00:20:10.132 ============================== 00:20:10.132 Admin Commands 00:20:10.132 -------------- 00:20:10.132 Get Log Page (02h): Supported 00:20:10.132 Identify (06h): Supported 00:20:10.132 Abort (08h): Supported 00:20:10.132 Set Features (09h): Supported 00:20:10.132 Get Features (0Ah): Supported 00:20:10.132 Asynchronous Event Request (0Ch): Supported 00:20:10.132 Keep Alive (18h): Supported 00:20:10.132 I/O Commands 00:20:10.132 ------------ 00:20:10.132 Flush (00h): Supported LBA-Change 00:20:10.132 Write (01h): Supported LBA-Change 00:20:10.132 Read (02h): Supported 00:20:10.132 Compare (05h): Supported 00:20:10.132 Write Zeroes (08h): Supported LBA-Change 00:20:10.132 Dataset Management (09h): Supported LBA-Change 00:20:10.132 Copy (19h): Supported LBA-Change 00:20:10.132 00:20:10.132 Error Log 00:20:10.132 ========= 00:20:10.132 00:20:10.132 Arbitration 00:20:10.132 =========== 00:20:10.132 Arbitration Burst: 1 00:20:10.132 00:20:10.132 Power Management 00:20:10.132 ================ 00:20:10.132 Number of Power States: 1 00:20:10.132 Current Power State: Power State #0 00:20:10.132 Power State #0: 00:20:10.132 Max Power: 0.00 W 00:20:10.132 Non-Operational State: Operational 00:20:10.132 Entry Latency: Not Reported 00:20:10.132 Exit Latency: Not Reported 00:20:10.132 Relative Read Throughput: 0 00:20:10.132 Relative Read Latency: 0 00:20:10.132 Relative Write Throughput: 0 00:20:10.132 Relative Write Latency: 0 00:20:10.132 Idle Power: Not Reported 00:20:10.132 Active Power: Not Reported 00:20:10.132 Non-Operational Permissive Mode: Not Supported 00:20:10.132 00:20:10.132 Health Information 00:20:10.132 ================== 00:20:10.132 Critical Warnings: 00:20:10.132 Available Spare Space: OK 00:20:10.132 Temperature: OK 00:20:10.132 Device Reliability: OK 00:20:10.132 Read Only: No 00:20:10.132 Volatile Memory Backup: OK 00:20:10.132 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:10.132 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:10.132 Available Spare: 0% 00:20:10.132 Available Spare Threshold: 0% 00:20:10.132 Life Percentage Used:[2024-10-07 11:31:05.506045] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.506052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d9750) 00:20:10.132 [2024-10-07 11:31:05.506060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.132 [2024-10-07 11:31:05.506082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3e2c0, cid 7, qid 0 00:20:10.132 [2024-10-07 11:31:05.506133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.132 [2024-10-07 11:31:05.506140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.132 [2024-10-07 11:31:05.506144] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.506148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3e2c0) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.506188] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:10.132 [2024-10-07 11:31:05.506200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d840) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.506206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:10.132 [2024-10-07 11:31:05.506212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d9c0) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.506217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.132 [2024-10-07 11:31:05.506223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3db40) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.506228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.132 [2024-10-07 11:31:05.506233] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.506238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.132 [2024-10-07 11:31:05.506248] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.506252] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.506256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.132 [2024-10-07 11:31:05.506264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.132 [2024-10-07 11:31:05.506302] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.132 [2024-10-07 11:31:05.510339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.132 [2024-10-07 11:31:05.510359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.132 [2024-10-07 11:31:05.510364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.510369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.132 [2024-10-07 11:31:05.510379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.132 [2024-10-07 11:31:05.510384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510387] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510425] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.510495] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.510502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.510506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.510516] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:10.133 [2024-10-07 11:31:05.510521] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:10.133 [2024-10-07 11:31:05.510531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:10.133 [2024-10-07 11:31:05.510536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510565] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.510619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.510626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.510630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.510645] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510650] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510678] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.510724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.510731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.510735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.510749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510781] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.510831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.510837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.510841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.510856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510865] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.510935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.510942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.510946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.510960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510965] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.510969] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.510976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.510993] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511039] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511052] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511071] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511080] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.511104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511150] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511180] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511184] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:10.133 [2024-10-07 11:31:05.511208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511255] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.511311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511374] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511382] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511386] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.511436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511507] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511512] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.511557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.133 [2024-10-07 11:31:05.511650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.133 [2024-10-07 11:31:05.511666] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.133 [2024-10-07 11:31:05.511713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.133 [2024-10-07 11:31:05.511720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.133 [2024-10-07 11:31:05.511724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.133 [2024-10-07 11:31:05.511728] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.133 [2024-10-07 11:31:05.511738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.511754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.511770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.511817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.511824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.511827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.511842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.511858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.511874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.511929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.511936] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.511939] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.511954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.511963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.511970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.511986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512052] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512067] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512071] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512149] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512153] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512157] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512167] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512172] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512246] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512252] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512256] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512260] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 
[2024-10-07 11:31:05.512271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512275] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512394] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512402] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512483] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512487] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512491] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512501] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512506] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512578] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512588] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512592] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512602] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 
11:31:05.512611] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512636] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512710] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512792] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512796] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512810] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512815] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.512899] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.512905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.512909] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512913] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.512924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.512932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.512939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.134 [2024-10-07 11:31:05.512959] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.134 [2024-10-07 11:31:05.513008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.134 [2024-10-07 11:31:05.513015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.134 [2024-10-07 11:31:05.513019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.513023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.134 [2024-10-07 11:31:05.513033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.513038] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.134 [2024-10-07 11:31:05.513042] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.134 [2024-10-07 11:31:05.513049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513065] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513122] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513145] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513169] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513213] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513238] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 
11:31:05.513326] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513334] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513338] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513343] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513467] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513470] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513563] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513649] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 
11:31:05.513652] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513753] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513845] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513851] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.513871] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513875] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.513903] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.513946] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.513953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.513957] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 
00:20:10.135 [2024-10-07 11:31:05.513971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513976] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.513980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.513987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.514004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.514050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.514057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.514061] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.514081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514089] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.514097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.514113] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.514159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.514166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.514170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.514184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.514201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.514217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.514263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.514270] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.514273] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514278] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.135 [2024-10-07 11:31:05.514301] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.135 [2024-10-07 11:31:05.514307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:10.135 [2024-10-07 11:31:05.514315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9750) 00:20:10.135 [2024-10-07 11:31:05.518345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.135 [2024-10-07 11:31:05.518376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3dcc0, cid 3, qid 0 00:20:10.135 [2024-10-07 11:31:05.518426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.135 [2024-10-07 11:31:05.518434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.135 [2024-10-07 11:31:05.518438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.136 [2024-10-07 11:31:05.518442] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3dcc0) on tqpair=0x9d9750 00:20:10.136 [2024-10-07 11:31:05.518452] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:10.136 0% 00:20:10.136 Data Units Read: 0 00:20:10.136 Data Units Written: 0 00:20:10.136 Host Read Commands: 0 00:20:10.136 Host Write Commands: 0 00:20:10.136 Controller Busy Time: 0 minutes 00:20:10.136 Power Cycles: 0 00:20:10.136 Power On Hours: 0 hours 00:20:10.136 Unsafe Shutdowns: 0 00:20:10.136 Unrecoverable Media Errors: 0 00:20:10.136 Lifetime Error Log Entries: 0 00:20:10.136 Warning Temperature Time: 0 minutes 00:20:10.136 Critical Temperature Time: 0 minutes 00:20:10.136 00:20:10.136 Number of Queues 00:20:10.136 ================ 00:20:10.136 Number of I/O Submission Queues: 127 00:20:10.136 Number of I/O Completion Queues: 127 00:20:10.136 00:20:10.136 Active Namespaces 00:20:10.136 ================= 00:20:10.136 Namespace ID:1 00:20:10.136 Error Recovery Timeout: Unlimited 00:20:10.136 Command Set Identifier: NVM (00h) 00:20:10.136 Deallocate: Supported 00:20:10.136 Deallocated/Unwritten Error: Not Supported 00:20:10.136 Deallocated Read Value: Unknown 00:20:10.136 Deallocate in Write Zeroes: Not Supported 00:20:10.136 Deallocated Guard Field: 0xFFFF 00:20:10.136 Flush: Supported 00:20:10.136 Reservation: Supported 00:20:10.136 Namespace Sharing Capabilities: Multiple Controllers 00:20:10.136 Size (in LBAs): 131072 (0GiB) 00:20:10.136 Capacity (in LBAs): 131072 (0GiB) 00:20:10.136 Utilization (in LBAs): 131072 (0GiB) 00:20:10.136 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:10.136 EUI64: ABCDEF0123456789 00:20:10.136 UUID: 508e0c38-a02c-4601-afb4-8f78c29769ab 00:20:10.136 Thin Provisioning: Not Supported 00:20:10.136 Per-NS Atomic Units: Yes 00:20:10.136 Atomic Boundary Size (Normal): 0 00:20:10.136 Atomic Boundary Size (PFail): 0 00:20:10.136 Atomic Boundary Offset: 0 00:20:10.136 Maximum Single Source Range Length: 65535 00:20:10.136 Maximum Copy Length: 65535 00:20:10.136 Maximum Source Range Count: 1 00:20:10.136 NGUID/EUI64 Never Reused: No 00:20:10.136 Namespace Write Protected: No 00:20:10.136 Number of LBA Formats: 1 00:20:10.136 Current LBA Format: LBA Format #00 00:20:10.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:10.136 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.136 11:31:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.136 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.136 rmmod nvme_tcp 00:20:10.136 rmmod nvme_fabrics 00:20:10.136 rmmod nvme_keyring 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 74306 ']' 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 74306 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74306 ']' 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74306 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74306 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.394 killing process with pid 74306 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74306' 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74306 00:20:10.394 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74306 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.653 
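The tail of the trace above shows nvmftestfini unloading the kernel initiator modules (nvme_tcp, nvme_fabrics, nvme_keyring) and then running the iptr helper, whose three @789 sub-commands (iptables-save, grep -v SPDK_NVMF, iptables-restore) most likely form a single pipeline that reloads the firewall rules minus the SPDK_NVMF-tagged entries added during setup. The helper's actual source is not shown in this log, so the following is an approximation of that step rather than the verbatim implementation:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering the saved rule set this way drops only the test-scoped rules while leaving unrelated rules on the CI host untouched.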
11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:10.653 11:31:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:10.653 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:10.654 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:10.654 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.654 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:10.913 00:20:10.913 real 0m2.874s 00:20:10.913 user 0m7.080s 00:20:10.913 sys 0m0.781s 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.913 ************************************ 00:20:10.913 END TEST nvmf_identify 00:20:10.913 ************************************ 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.913 ************************************ 00:20:10.913 START TEST nvmf_perf 00:20:10.913 ************************************ 00:20:10.913 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:10.913 * Looking for test storage... 
00:20:10.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.914 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:10.914 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:10.914 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:11.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.174 --rc genhtml_branch_coverage=1 00:20:11.174 --rc genhtml_function_coverage=1 00:20:11.174 --rc genhtml_legend=1 00:20:11.174 --rc geninfo_all_blocks=1 00:20:11.174 --rc geninfo_unexecuted_blocks=1 00:20:11.174 00:20:11.174 ' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:11.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.174 --rc genhtml_branch_coverage=1 00:20:11.174 --rc genhtml_function_coverage=1 00:20:11.174 --rc genhtml_legend=1 00:20:11.174 --rc geninfo_all_blocks=1 00:20:11.174 --rc geninfo_unexecuted_blocks=1 00:20:11.174 00:20:11.174 ' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:11.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.174 --rc genhtml_branch_coverage=1 00:20:11.174 --rc genhtml_function_coverage=1 00:20:11.174 --rc genhtml_legend=1 00:20:11.174 --rc geninfo_all_blocks=1 00:20:11.174 --rc geninfo_unexecuted_blocks=1 00:20:11.174 00:20:11.174 ' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:11.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.174 --rc genhtml_branch_coverage=1 00:20:11.174 --rc genhtml_function_coverage=1 00:20:11.174 --rc genhtml_legend=1 00:20:11.174 --rc geninfo_all_blocks=1 00:20:11.174 --rc geninfo_unexecuted_blocks=1 00:20:11.174 00:20:11.174 ' 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.174 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:11.175 Cannot find device "nvmf_init_br" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:11.175 Cannot find device "nvmf_init_br2" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:11.175 Cannot find device "nvmf_tgt_br" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.175 Cannot find device "nvmf_tgt_br2" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:11.175 Cannot find device "nvmf_init_br" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:11.175 Cannot find device "nvmf_init_br2" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:11.175 Cannot find device "nvmf_tgt_br" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:11.175 Cannot find device "nvmf_tgt_br2" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:11.175 Cannot find device "nvmf_br" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:11.175 Cannot find device "nvmf_init_if" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:11.175 Cannot find device "nvmf_init_if2" 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.175 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.176 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:11.176 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.176 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.176 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.176 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:11.435 11:31:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:11.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:11.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:20:11.435 00:20:11.435 --- 10.0.0.3 ping statistics --- 00:20:11.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.435 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:11.435 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:11.435 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:20:11.435 00:20:11.435 --- 10.0.0.4 ping statistics --- 00:20:11.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.435 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:11.435 00:20:11.435 --- 10.0.0.1 ping statistics --- 00:20:11.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.435 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:11.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:20:11.435 00:20:11.435 --- 10.0.0.2 ping statistics --- 00:20:11.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.435 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:11.435 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=74563 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 74563 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74563 ']' 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
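The nvmftestinit trace above builds the virtual network the perf host tests run over: two initiator veth interfaces in the root namespace, two target interfaces moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge and verified with pings before the target app is started. A condensed, hand-written sketch of the equivalent setup (assuming root, iproute2 and iptables, and the same interface, namespace, and address names nvmf/common.sh uses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk           # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_br nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                       # bridge the veth peer ends together
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let bridged traffic pass
    ping -c 1 10.0.0.3                                                   # initiator -> target check

The "Cannot find device" messages earlier in the trace are expected: nvmf_veth_init first tries to remove any interfaces left over from a previous run before creating fresh ones.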
00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.436 11:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:11.694 [2024-10-07 11:31:06.979194] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:11.694 [2024-10-07 11:31:06.979312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.694 [2024-10-07 11:31:07.121538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.952 [2024-10-07 11:31:07.255768] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.952 [2024-10-07 11:31:07.255834] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.952 [2024-10-07 11:31:07.255850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.952 [2024-10-07 11:31:07.255861] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.952 [2024-10-07 11:31:07.255871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.952 [2024-10-07 11:31:07.257215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.952 [2024-10-07 11:31:07.257369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.952 [2024-10-07 11:31:07.257443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.953 [2024-10-07 11:31:07.257445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.953 [2024-10-07 11:31:07.319610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:12.520 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.520 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:12.520 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:12.520 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.520 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:12.779 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.779 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:12.779 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:13.040 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:13.040 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:13.607 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:13.607 11:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:13.865 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:13.865 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:20:13.865 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:13.865 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:13.865 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.123 [2024-10-07 11:31:09.422542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.124 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.382 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.382 11:31:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.641 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.641 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:14.899 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:15.158 [2024-10-07 11:31:10.516511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.158 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:15.416 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:15.416 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:15.416 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:15.416 11:31:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:16.792 Initializing NVMe Controllers 00:20:16.792 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:16.792 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:16.792 Initialization complete. Launching workers. 00:20:16.792 ======================================================== 00:20:16.792 Latency(us) 00:20:16.792 Device Information : IOPS MiB/s Average min max 00:20:16.792 PCIE (0000:00:10.0) NSID 1 from core 0: 22142.43 86.49 1444.72 314.61 6208.69 00:20:16.792 ======================================================== 00:20:16.792 Total : 22142.43 86.49 1444.72 314.61 6208.69 00:20:16.792 00:20:16.792 11:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:17.752 Initializing NVMe Controllers 00:20:17.753 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.753 Initialization complete. Launching workers. 
00:20:17.753 ======================================================== 00:20:17.753 Latency(us) 00:20:17.753 Device Information : IOPS MiB/s Average min max 00:20:17.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3659.00 14.29 272.94 105.84 7174.58 00:20:17.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.16 5193.32 12039.77 00:20:17.753 ======================================================== 00:20:17.753 Total : 3783.00 14.78 529.90 105.84 12039.77 00:20:17.753 00:20:18.011 11:31:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:19.388 Initializing NVMe Controllers 00:20:19.388 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.388 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:19.388 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:19.388 Initialization complete. Launching workers. 00:20:19.388 ======================================================== 00:20:19.388 Latency(us) 00:20:19.388 Device Information : IOPS MiB/s Average min max 00:20:19.388 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8680.97 33.91 3687.76 586.55 7784.82 00:20:19.388 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4007.14 15.65 8002.38 6688.27 9389.68 00:20:19.388 ======================================================== 00:20:19.388 Total : 12688.11 49.56 5050.40 586.55 9389.68 00:20:19.388 00:20:19.388 11:31:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:19.388 11:31:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:21.926 Initializing NVMe Controllers 00:20:21.926 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.926 Controller IO queue size 128, less than required. 00:20:21.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.926 Controller IO queue size 128, less than required. 00:20:21.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.926 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.926 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:21.926 Initialization complete. Launching workers. 
00:20:21.926 ======================================================== 00:20:21.926 Latency(us) 00:20:21.926 Device Information : IOPS MiB/s Average min max 00:20:21.926 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1614.34 403.59 80724.63 48993.07 126448.98 00:20:21.926 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.85 159.71 211046.56 56349.09 343369.97 00:20:21.926 ======================================================== 00:20:21.926 Total : 2253.20 563.30 117675.14 48993.07 343369.97 00:20:21.926 00:20:21.926 11:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:20:21.926 Initializing NVMe Controllers 00:20:21.926 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.926 Controller IO queue size 128, less than required. 00:20:21.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.926 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:21.926 Controller IO queue size 128, less than required. 00:20:21.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.926 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:21.926 WARNING: Some requested NVMe devices were skipped 00:20:21.926 No valid NVMe controllers or AIO or URING devices found 00:20:22.184 11:31:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:20:24.717 Initializing NVMe Controllers 00:20:24.717 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.717 Controller IO queue size 128, less than required. 00:20:24.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.717 Controller IO queue size 128, less than required. 00:20:24.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.717 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.717 Initialization complete. Launching workers. 
00:20:24.717 00:20:24.717 ==================== 00:20:24.717 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:24.717 TCP transport: 00:20:24.717 polls: 8969 00:20:24.717 idle_polls: 5544 00:20:24.717 sock_completions: 3425 00:20:24.717 nvme_completions: 5963 00:20:24.717 submitted_requests: 8972 00:20:24.717 queued_requests: 1 00:20:24.717 00:20:24.717 ==================== 00:20:24.717 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:24.717 TCP transport: 00:20:24.717 polls: 9390 00:20:24.717 idle_polls: 4733 00:20:24.717 sock_completions: 4657 00:20:24.717 nvme_completions: 6179 00:20:24.717 submitted_requests: 9252 00:20:24.717 queued_requests: 1 00:20:24.717 ======================================================== 00:20:24.717 Latency(us) 00:20:24.717 Device Information : IOPS MiB/s Average min max 00:20:24.717 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1490.40 372.60 87478.71 36675.92 152282.90 00:20:24.717 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1544.40 386.10 82500.61 32080.46 132151.39 00:20:24.717 ======================================================== 00:20:24.717 Total : 3034.80 758.70 84945.37 32080.46 152282.90 00:20:24.717 00:20:24.717 11:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:24.717 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:24.975 rmmod nvme_tcp 00:20:24.975 rmmod nvme_fabrics 00:20:24.975 rmmod nvme_keyring 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 74563 ']' 00:20:24.975 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 74563 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74563 ']' 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74563 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74563 00:20:24.976 killing process with pid 74563 00:20:24.976 11:31:20 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74563' 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74563 00:20:24.976 11:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74563 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:25.542 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:20:25.801 ************************************ 00:20:25.801 END TEST nvmf_perf 00:20:25.801 ************************************ 
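Stripped of the xtrace noise, the nvmf_perf run above amounts to provisioning an NVMe-oF/TCP subsystem over RPC and driving it with spdk_nvme_perf. A condensed sketch of that sequence (assuming a running nvmf_tgt reachable on the default RPC socket, rpc.py and spdk_nvme_perf from the SPDK tree with paths abbreviated, and the same NQN and listen address used in this run):

    rpc.py bdev_malloc_create 64 512                                    # Malloc0: 64 MiB, 512 B blocks
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # NSID 1, 512 B sectors
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # NSID 2, 4096 B sectors
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # First of several measurement passes (queue depth, IO size, and duration vary per pass):
    spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1             # teardown before nvmftestfini

The -o 36964 pass is expected to skip both namespaces and report "No valid NVMe controllers": 36964 is not a multiple of either namespace's sector size, so that pass only exercises the warning path rather than measuring throughput.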
00:20:25.801 00:20:25.801 real 0m15.012s 00:20:25.801 user 0m54.304s 00:20:25.801 sys 0m4.198s 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.801 11:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.061 ************************************ 00:20:26.061 START TEST nvmf_fio_host 00:20:26.061 ************************************ 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:26.061 * Looking for test storage... 00:20:26.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:26.061 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:26.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.062 --rc genhtml_branch_coverage=1 00:20:26.062 --rc genhtml_function_coverage=1 00:20:26.062 --rc genhtml_legend=1 00:20:26.062 --rc geninfo_all_blocks=1 00:20:26.062 --rc geninfo_unexecuted_blocks=1 00:20:26.062 00:20:26.062 ' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:26.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.062 --rc genhtml_branch_coverage=1 00:20:26.062 --rc genhtml_function_coverage=1 00:20:26.062 --rc genhtml_legend=1 00:20:26.062 --rc geninfo_all_blocks=1 00:20:26.062 --rc geninfo_unexecuted_blocks=1 00:20:26.062 00:20:26.062 ' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:26.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.062 --rc genhtml_branch_coverage=1 00:20:26.062 --rc genhtml_function_coverage=1 00:20:26.062 --rc genhtml_legend=1 00:20:26.062 --rc geninfo_all_blocks=1 00:20:26.062 --rc geninfo_unexecuted_blocks=1 00:20:26.062 00:20:26.062 ' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:26.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.062 --rc genhtml_branch_coverage=1 00:20:26.062 --rc genhtml_function_coverage=1 00:20:26.062 --rc genhtml_legend=1 00:20:26.062 --rc geninfo_all_blocks=1 00:20:26.062 --rc geninfo_unexecuted_blocks=1 00:20:26.062 00:20:26.062 ' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.062 11:31:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
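The nvmftestinit call traced next tears down whatever is left of a previous run and then has nvmf_veth_init rebuild the virtual NVMe/TCP test topology: the initiator-side interfaces stay in the root namespace, the target-side interfaces move into the nvmf_tgt_ns_spdk namespace together with the nvmf_tgt process, and the bridge ends of all four veth pairs are enslaved to nvmf_br. Condensed from the ip and iptables commands the log traces below (a sketch using the same device names and addresses, not the literal common.sh code), the setup amounts to:

# Sketch of the topology nvmf_veth_init builds, using the names from this log.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Four veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace with the nvmf_tgt process.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator addresses in the root namespace, target addresses inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring the links up and bridge the *_br ends so both sides can reach each other.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator interfaces and allow bridged traffic.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" messages that follow in the trace are expected on a clean host: the teardown commands run unconditionally before the setup above is (re)applied.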
00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:26.062 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:26.063 Cannot find device "nvmf_init_br" 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:26.063 Cannot find device "nvmf_init_br2" 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:26.063 Cannot find device "nvmf_tgt_br" 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:20:26.063 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:26.322 Cannot find device "nvmf_tgt_br2" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:26.322 Cannot find device "nvmf_init_br" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:26.322 Cannot find device "nvmf_init_br2" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:26.322 Cannot find device "nvmf_tgt_br" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:26.322 Cannot find device "nvmf_tgt_br2" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:26.322 Cannot find device "nvmf_br" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:26.322 Cannot find device "nvmf_init_if" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:26.322 Cannot find device "nvmf_init_if2" 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:26.322 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:26.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:26.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:20:26.581 00:20:26.581 --- 10.0.0.3 ping statistics --- 00:20:26.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.581 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:26.581 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:26.581 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:20:26.581 00:20:26.581 --- 10.0.0.4 ping statistics --- 00:20:26.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.581 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:26.581 00:20:26.581 --- 10.0.0.1 ping statistics --- 00:20:26.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.581 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:26.581 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:26.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:26.581 00:20:26.581 --- 10.0.0.2 ping statistics --- 00:20:26.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.581 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75029 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75029 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 75029 ']' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.582 11:31:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.582 [2024-10-07 11:31:22.024509] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:26.582 [2024-10-07 11:31:22.024651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.841 [2024-10-07 11:31:22.173486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.841 [2024-10-07 11:31:22.300995] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.841 [2024-10-07 11:31:22.301077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.841 [2024-10-07 11:31:22.301091] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.841 [2024-10-07 11:31:22.301102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.841 [2024-10-07 11:31:22.301112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
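Once the target comes up inside the namespace, the rest of the prologue traced below is plain RPC-driven provisioning: host/fio.sh waits for the application socket and then creates the TCP transport, a malloc bdev, and a subsystem exposing it on 10.0.0.3:4420. Restated as a standalone sequence (the commands are the ones this trace shows; the spdk_get_version readiness probe is only an assumed stand-in for what waitforlisten does):

# Launch the target inside the test namespace, then provision it over RPC.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Wait for the RPC socket to answer before configuring (sketch of waitforlisten).
until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.1; done

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

I/O is then driven through the fio NVMe plugin, addressing the subsystem by transport parameters instead of a kernel block device, exactly as the trace shows:

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096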
00:20:26.841 [2024-10-07 11:31:22.302438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.841 [2024-10-07 11:31:22.302571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.841 [2024-10-07 11:31:22.302706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.841 [2024-10-07 11:31:22.302699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.841 [2024-10-07 11:31:22.359406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.776 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.776 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:27.776 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:27.776 [2024-10-07 11:31:23.279487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.035 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:28.035 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.035 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.035 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:28.293 Malloc1 00:20:28.293 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.551 11:31:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:28.808 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:29.066 [2024-10-07 11:31:24.415260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.066 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:29.324 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:29.325 11:31:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:29.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:29.583 fio-3.35 00:20:29.583 Starting 1 thread 00:20:32.115 00:20:32.115 test: (groupid=0, jobs=1): err= 0: pid=75112: Mon Oct 7 11:31:27 2024 00:20:32.115 read: IOPS=8813, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec) 00:20:32.115 slat (usec): min=2, max=223, avg= 2.47, stdev= 2.19 00:20:32.115 clat (usec): min=1727, max=14366, avg=7555.39, stdev=635.88 00:20:32.115 lat (usec): min=1760, max=14369, avg=7557.86, stdev=635.70 00:20:32.115 clat percentiles (usec): 00:20:32.115 | 1.00th=[ 6521], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:20:32.115 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:20:32.115 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8455], 00:20:32.115 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[11994], 99.95th=[13435], 00:20:32.115 | 99.99th=[14353] 00:20:32.115 bw ( KiB/s): min=34816, max=35944, per=100.00%, avg=35256.00, stdev=487.10, samples=4 00:20:32.115 iops : min= 8704, max= 8986, avg=8814.00, stdev=121.78, samples=4 00:20:32.115 write: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2007msec); 0 zone resets 00:20:32.115 slat (usec): min=2, max=166, avg= 2.61, stdev= 1.48 00:20:32.115 clat (usec): min=1621, max=13430, avg=6890.95, stdev=587.65 00:20:32.115 lat (usec): min=1631, max=13433, avg=6893.55, stdev=587.57 00:20:32.115 clat percentiles 
(usec): 00:20:32.115 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:20:32.115 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:20:32.115 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7635], 00:20:32.115 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[11863], 99.95th=[12518], 00:20:32.115 | 99.99th=[13435] 00:20:32.115 bw ( KiB/s): min=34344, max=35968, per=99.98%, avg=35298.00, stdev=705.37, samples=4 00:20:32.115 iops : min= 8586, max= 8992, avg=8824.50, stdev=176.34, samples=4 00:20:32.115 lat (msec) : 2=0.03%, 4=0.13%, 10=99.27%, 20=0.57% 00:20:32.115 cpu : usr=70.44%, sys=22.68%, ctx=27, majf=0, minf=6 00:20:32.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:32.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.115 issued rwts: total=17689,17715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.115 00:20:32.115 Run status group 0 (all jobs): 00:20:32.115 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:20:32.115 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:32.115 11:31:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:32.115 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:32.115 fio-3.35 00:20:32.115 Starting 1 thread 00:20:34.646 00:20:34.646 test: (groupid=0, jobs=1): err= 0: pid=75161: Mon Oct 7 11:31:29 2024 00:20:34.646 read: IOPS=8246, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:20:34.646 slat (usec): min=3, max=117, avg= 3.75, stdev= 1.82 00:20:34.646 clat (usec): min=2962, max=17161, avg=8532.52, stdev=2552.40 00:20:34.646 lat (usec): min=2966, max=17164, avg=8536.26, stdev=2552.46 00:20:34.646 clat percentiles (usec): 00:20:34.646 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6259], 00:20:34.646 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8848], 00:20:34.646 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11994], 95.00th=[13304], 00:20:34.646 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16450], 99.95th=[16712], 00:20:34.646 | 99.99th=[17171] 00:20:34.646 bw ( KiB/s): min=64224, max=72096, per=51.30%, avg=67688.00, stdev=3307.67, samples=4 00:20:34.646 iops : min= 4014, max= 4506, avg=4230.50, stdev=206.73, samples=4 00:20:34.646 write: IOPS=4718, BW=73.7MiB/s (77.3MB/s)(138MiB/1875msec); 0 zone resets 00:20:34.646 slat (usec): min=33, max=354, avg=38.64, stdev= 7.38 00:20:34.646 clat (usec): min=5552, max=20724, avg=12322.03, stdev=2256.73 00:20:34.646 lat (usec): min=5589, max=20761, avg=12360.67, stdev=2256.83 00:20:34.646 clat percentiles (usec): 00:20:34.646 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10421], 00:20:34.647 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12780], 00:20:34.647 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15270], 95.00th=[16057], 00:20:34.647 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20055], 99.95th=[20317], 00:20:34.647 | 99.99th=[20841] 00:20:34.647 bw ( KiB/s): min=65824, max=75456, per=93.15%, avg=70320.00, stdev=4180.99, samples=4 00:20:34.647 iops : min= 4114, max= 4716, avg=4395.00, stdev=261.31, samples=4 00:20:34.647 lat (msec) : 4=0.40%, 10=52.50%, 20=47.06%, 50=0.04% 00:20:34.647 cpu : usr=82.77%, sys=13.55%, ctx=6, majf=0, minf=3 00:20:34.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:34.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.647 issued rwts: total=16559,8847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.647 00:20:34.647 Run status group 0 (all jobs): 00:20:34.647 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2008-2008msec 00:20:34.647 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=138MiB (145MB), run=1875-1875msec 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.647 11:31:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.647 rmmod nvme_tcp 00:20:34.647 rmmod nvme_fabrics 00:20:34.647 rmmod nvme_keyring 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 75029 ']' 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 75029 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 75029 ']' 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 75029 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75029 00:20:34.647 killing process with pid 75029 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75029' 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 75029 00:20:34.647 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 75029 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@789 -- # iptables-save 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.906 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:20:35.164 00:20:35.164 real 0m9.269s 00:20:35.164 user 0m36.434s 00:20:35.164 sys 0m2.463s 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.164 ************************************ 00:20:35.164 END TEST nvmf_fio_host 00:20:35.164 ************************************ 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.164 ************************************ 00:20:35.164 START TEST nvmf_failover 00:20:35.164 
************************************ 00:20:35.164 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:35.423 * Looking for test storage... 00:20:35.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:35.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.423 --rc genhtml_branch_coverage=1 00:20:35.423 --rc genhtml_function_coverage=1 00:20:35.423 --rc genhtml_legend=1 00:20:35.423 --rc geninfo_all_blocks=1 00:20:35.423 --rc geninfo_unexecuted_blocks=1 00:20:35.423 00:20:35.423 ' 00:20:35.423 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:35.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.423 --rc genhtml_branch_coverage=1 00:20:35.423 --rc genhtml_function_coverage=1 00:20:35.423 --rc genhtml_legend=1 00:20:35.423 --rc geninfo_all_blocks=1 00:20:35.423 --rc geninfo_unexecuted_blocks=1 00:20:35.423 00:20:35.423 ' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:35.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.424 --rc genhtml_branch_coverage=1 00:20:35.424 --rc genhtml_function_coverage=1 00:20:35.424 --rc genhtml_legend=1 00:20:35.424 --rc geninfo_all_blocks=1 00:20:35.424 --rc geninfo_unexecuted_blocks=1 00:20:35.424 00:20:35.424 ' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:35.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.424 --rc genhtml_branch_coverage=1 00:20:35.424 --rc genhtml_function_coverage=1 00:20:35.424 --rc genhtml_legend=1 00:20:35.424 --rc geninfo_all_blocks=1 00:20:35.424 --rc geninfo_unexecuted_blocks=1 00:20:35.424 00:20:35.424 ' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8f4e03b1-7080-439e-b116-202a2cecf6a1 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.424 
11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
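The "[: : integer expression expected" complaint that appears each time common.sh is sourced (here and in the nvmf_fio_host prologue above) is the usual test-builtin pitfall: an unset variable expands to an empty word, and [ '' -eq 1 ] is not a valid integer comparison. The harness tolerates it because the test simply evaluates false, but the general failure mode and its fix look like this (VAR is a stand-in; the log does not show which variable common.sh line 33 actually checks):

VAR=""                               # unset/empty, as in the traced '[' '' -eq 1 ']'
[ "$VAR" -eq 1 ] && echo hit         # prints "[: : integer expression expected"

[ "${VAR:-0}" -eq 1 ] && echo hit    # supply a numeric default: quietly false instead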
00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:35.424 Cannot find device "nvmf_init_br" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:35.424 Cannot find device "nvmf_init_br2" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:20:35.424 Cannot find device "nvmf_tgt_br" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.424 Cannot find device "nvmf_tgt_br2" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:35.424 Cannot find device "nvmf_init_br" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:35.424 Cannot find device "nvmf_init_br2" 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:20:35.424 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:35.682 Cannot find device "nvmf_tgt_br" 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:35.682 Cannot find device "nvmf_tgt_br2" 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:35.682 Cannot find device "nvmf_br" 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:35.682 Cannot find device "nvmf_init_if" 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:35.682 Cannot find device "nvmf_init_if2" 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:20:35.682 11:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.682 
11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.682 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:35.683 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:35.683 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.683 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:35.683 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:20:35.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:35.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms
00:20:35.941
00:20:35.941 --- 10.0.0.3 ping statistics ---
00:20:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.941 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:20:35.941 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:20:35.941 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms
00:20:35.941
00:20:35.941 --- 10.0.0.4 ping statistics ---
00:20:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.941 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:35.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:35.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:20:35.941
00:20:35.941 --- 10.0.0.1 ping statistics ---
00:20:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.941 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:20:35.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:35.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:20:35.941
00:20:35.941 --- 10.0.0.2 ping statistics ---
00:20:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.941 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=75427
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 75427
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75427 ']'
00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.941 11:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:35.941 [2024-10-07 11:31:31.350176] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:35.941 [2024-10-07 11:31:31.350297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.199 [2024-10-07 11:31:31.490282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.199 [2024-10-07 11:31:31.617968] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.199 [2024-10-07 11:31:31.618039] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.199 [2024-10-07 11:31:31.618053] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.199 [2024-10-07 11:31:31.618064] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.199 [2024-10-07 11:31:31.618077] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
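For readers following the nvmf_veth_init trace above: it builds two veth paths between the root namespace (initiator side, 10.0.0.1 and 10.0.0.2) and the nvmf_tgt_ns_spdk namespace (target side, 10.0.0.3 and 10.0.0.4), enslaves all four peer ends to the nvmf_br bridge, opens TCP port 4420 in iptables, and then verifies connectivity with ping before the target is launched inside the namespace. A condensed sketch of the same commands, showing only the first initiator/target pair (the run repeats the pattern for nvmf_init_if2/nvmf_tgt_if2 and 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # root ns can reach the target IP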
00:20:36.199 [2024-10-07 11:31:31.618743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.199 [2024-10-07 11:31:31.618835] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.199 [2024-10-07 11:31:31.618842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.199 [2024-10-07 11:31:31.675816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.134 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:37.392 [2024-10-07 11:31:32.771041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.392 11:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:37.650 Malloc0 00:20:37.650 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.908 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:38.490 11:31:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:38.490 [2024-10-07 11:31:33.989618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.490 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:38.748 [2024-10-07 11:31:34.249815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:38.748 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:39.314 [2024-10-07 11:31:34.530075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75490 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
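Condensed, the target-side configuration traced above amounts to: create the TCP transport, back it with a 64 MiB, 512-byte-block Malloc bdev, expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.3 ports 4420, 4421 and 4422; bdevperf is then started as a separate SPDK app on its own RPC socket so the listeners can be torn down and re-added underneath it. The same sequence as plain commands (paths and arguments exactly as in this run; the loop is just shorthand for the three separate add_listener calls the script makes):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done
    # bdevperf on its own RPC socket: 128 QD, 4 KiB verify I/O for 15 s;
    # -z defers the workload until bdevperf.py later calls perform_tests
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The entries that follow attach NVMe0 to port 4420 with `-x failover`, add 4421 as a second path, and then remove and re-add listeners one at a time; the "ABORTED - SQ DELETION" commands dumped from try.txt further down are the I/O that was in flight on a path at the moment its listener was removed.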
00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75490 /var/tmp/bdevperf.sock 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75490 ']' 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.314 11:31:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:40.246 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.246 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:40.246 11:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:40.503 NVMe0n1 00:20:40.503 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:41.069 NVMe0n1 00:20:41.069 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75514 00:20:41.069 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.069 11:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:42.001 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.259 [2024-10-07 11:31:37.651872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebabb0 is same with the state(6) to be set 00:20:42.259 [2024-10-07 11:31:37.651922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebabb0 is same with the state(6) to be set 00:20:42.259 [2024-10-07 11:31:37.651943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebabb0 is same with the state(6) to be set 00:20:42.259 [2024-10-07 11:31:37.651952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebabb0 is same with the state(6) to be set 00:20:42.259 [2024-10-07 11:31:37.651962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebabb0 is same with the state(6) to be set 00:20:42.259 11:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:45.548 11:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:45.548 
NVMe0n1 00:20:45.548 11:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:46.114 [2024-10-07 11:31:41.381287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15cf0 is same with the state(6) to be set 00:20:46.114 [2024-10-07 11:31:41.381346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15cf0 is same with the state(6) to be set 00:20:46.114 [2024-10-07 11:31:41.381358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15cf0 is same with the state(6) to be set 00:20:46.114 [2024-10-07 11:31:41.381367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15cf0 is same with the state(6) to be set 00:20:46.114 [2024-10-07 11:31:41.381376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15cf0 is same with the state(6) to be set 00:20:46.114 11:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:49.396 11:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:49.396 [2024-10-07 11:31:44.672852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.396 11:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:50.331 11:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:50.597 11:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75514 00:20:57.221 { 00:20:57.221 "results": [ 00:20:57.221 { 00:20:57.221 "job": "NVMe0n1", 00:20:57.221 "core_mask": "0x1", 00:20:57.221 "workload": "verify", 00:20:57.221 "status": "finished", 00:20:57.221 "verify_range": { 00:20:57.221 "start": 0, 00:20:57.221 "length": 16384 00:20:57.221 }, 00:20:57.221 "queue_depth": 128, 00:20:57.221 "io_size": 4096, 00:20:57.221 "runtime": 15.009957, 00:20:57.221 "iops": 8976.108325959895, 00:20:57.221 "mibps": 35.06292314828084, 00:20:57.221 "io_failed": 0, 00:20:57.221 "io_timeout": 0, 00:20:57.221 "avg_latency_us": 14227.395408790986, 00:20:57.221 "min_latency_us": 1459.6654545454546, 00:20:57.221 "max_latency_us": 17992.61090909091 00:20:57.221 } 00:20:57.221 ], 00:20:57.221 "core_count": 1 00:20:57.221 } 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75490 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75490 ']' 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75490 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75490 00:20:57.221 killing process with pid 75490 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75490' 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75490 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75490 00:20:57.221 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:57.221 [2024-10-07 11:31:34.593419] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization... 00:20:57.221 [2024-10-07 11:31:34.593522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75490 ] 00:20:57.221 [2024-10-07 11:31:34.728619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.221 [2024-10-07 11:31:34.843771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.221 [2024-10-07 11:31:34.897824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.221 Running I/O for 15 seconds... 00:20:57.221 6820.00 IOPS, 26.64 MiB/s [2024-10-07T11:31:52.744Z] [2024-10-07 11:31:37.651762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.221 [2024-10-07 11:31:37.651867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.651891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.221 [2024-10-07 11:31:37.651906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.651921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.221 [2024-10-07 11:31:37.651943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.651959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.221 [2024-10-07 11:31:37.651973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.651988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.221 [2024-10-07 11:31:37.652066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.221 [2024-10-07 11:31:37.652253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.221 [2024-10-07 11:31:37.652299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.652747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 
[2024-10-07 11:31:37.652825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.652981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.652995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.222 [2024-10-07 11:31:37.653255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.222 [2024-10-07 11:31:37.653660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.222 [2024-10-07 11:31:37.653677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66496 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.653973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.653994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 
11:31:37.654130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.223 [2024-10-07 11:31:37.654828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.223 [2024-10-07 11:31:37.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.223 [2024-10-07 11:31:37.654989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 
11:31:37.655460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.224 [2024-10-07 11:31:37.655731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.655981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.655998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.224 [2024-10-07 11:31:37.656198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.224 [2024-10-07 11:31:37.656273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.224 [2024-10-07 11:31:37.656285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66296 len:8 PRP1 0x0 PRP2 0x0 00:20:57.224 [2024-10-07 11:31:37.656299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.224 [2024-10-07 11:31:37.656372] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe31770 was disconnected and freed. reset controller. 00:20:57.224 [2024-10-07 11:31:37.657441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.224 [2024-10-07 11:31:37.657502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.224 [2024-10-07 11:31:37.657847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.224 [2024-10-07 11:31:37.657881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.224 [2024-10-07 11:31:37.657899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.657951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.657995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.658028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.658045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.658079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.225 [2024-10-07 11:31:37.668379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.668526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.668561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.668580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.668630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.668668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.668687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.668702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.668733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.225 [2024-10-07 11:31:37.678465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.678583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.678615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.678632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.678664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.678697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.678715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.678729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.678758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.225 [2024-10-07 11:31:37.690321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.690455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.690488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.690505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.690538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.690570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.690588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.690602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.690633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.225 [2024-10-07 11:31:37.700465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.700617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.700649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.700667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.700699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.700732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.700750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.700764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.700794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.225 [2024-10-07 11:31:37.711464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.711730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.711791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.711828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.713393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.714855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.714919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.714953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.715216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.225 [2024-10-07 11:31:37.721584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.721863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.721909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.721929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.722063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.722189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.722220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.722236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.722337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.225 [2024-10-07 11:31:37.732995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.733117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.733149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.733167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.733219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.733252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.733271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.733285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.733330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.225 [2024-10-07 11:31:37.743092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.743209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.743241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.743258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.743290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.225 [2024-10-07 11:31:37.743339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.225 [2024-10-07 11:31:37.743360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.225 [2024-10-07 11:31:37.743375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.225 [2024-10-07 11:31:37.743406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.225 [2024-10-07 11:31:37.753712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.225 [2024-10-07 11:31:37.753837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.225 [2024-10-07 11:31:37.753869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.225 [2024-10-07 11:31:37.753886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.225 [2024-10-07 11:31:37.753918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.753951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.753968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.753983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.754013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.763808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.764093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.764138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.764158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.764292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.764434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.764462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.764492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.764552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.226 [2024-10-07 11:31:37.775120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.775239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.775271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.775288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.775337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.775373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.775391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.775405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.775436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.785215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.785344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.785377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.785394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.785427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.785459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.785476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.785490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.785520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.226 [2024-10-07 11:31:37.795969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.796109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.796140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.796159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.796191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.796222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.796240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.796254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.796285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.806063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.806362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.806411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.806430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.806564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.806700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.806724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.806739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.806795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.226 [2024-10-07 11:31:37.817330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.817449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.817481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.817498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.817529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.817561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.817580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.817594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.817624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.827439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.827555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.827586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.827603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.827635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.827668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.827685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.827699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.827729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.226 [2024-10-07 11:31:37.838222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.838369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.838402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.838419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.838454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.838502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.838522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.838536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.838567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.848311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.848439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.848470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.848488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.848673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.848826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.848852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.848866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.848981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.226 [2024-10-07 11:31:37.859672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.859798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.859830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.859847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.226 [2024-10-07 11:31:37.859878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.226 [2024-10-07 11:31:37.859910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.226 [2024-10-07 11:31:37.859928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.226 [2024-10-07 11:31:37.859941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.226 [2024-10-07 11:31:37.859971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.226 [2024-10-07 11:31:37.869797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.226 [2024-10-07 11:31:37.869914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.226 [2024-10-07 11:31:37.869945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.226 [2024-10-07 11:31:37.869962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.869994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.870026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.870044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.870058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.870104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.880312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.880464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.880495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.880513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.880545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.880578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.880596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.880610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.880640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.227 [2024-10-07 11:31:37.890418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.890533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.890564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.890581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.890613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.890644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.890662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.890677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.890724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.901804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.901922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.901954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.901971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.902003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.902035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.902053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.902067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.902097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.227 [2024-10-07 11:31:37.911895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.912010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.912041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.912079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.912113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.912160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.912180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.912195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.913407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.922270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.922426] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.922459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.922476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.922509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.922540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.922558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.922572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.922603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.227 [2024-10-07 11:31:37.932394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.932674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.932718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.932737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.932872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.932997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.933022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.933038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.933093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.943415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.943532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.943564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.943581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.943613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.943646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.943678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.943693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.943725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.227 [2024-10-07 11:31:37.953506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.953622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.953653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.953671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.954874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.955137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.955174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.955192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.956014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.963609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.963723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.963755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.963772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.963803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.963835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.963852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.963866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.963896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.227 [2024-10-07 11:31:37.973958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.974163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.974196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.227 [2024-10-07 11:31:37.974214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.227 [2024-10-07 11:31:37.974268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.227 [2024-10-07 11:31:37.974334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.227 [2024-10-07 11:31:37.974356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.227 [2024-10-07 11:31:37.974371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.227 [2024-10-07 11:31:37.974402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.227 [2024-10-07 11:31:37.984556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.227 [2024-10-07 11:31:37.984682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.227 [2024-10-07 11:31:37.984714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:37.984731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:37.984763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:37.984795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:37.984813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:37.984828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:37.984857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:37.994661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:37.994778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:37.994810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:37.994827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:37.996031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:37.996278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:37.996327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:37.996347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:37.997151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.228 [2024-10-07 11:31:38.004749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.004863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.004895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.004912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.004944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.004975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.004994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.005008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.005038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:38.014839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.015114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.015157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.015177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.015345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.015484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.015518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.015535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.015591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.228 [2024-10-07 11:31:38.025814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.025933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.025964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.025981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.026019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.026051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.026069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.026084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.026114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:38.035906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.036032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.036064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.036081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.036113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.036145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.036163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.036177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.037374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.228 [2024-10-07 11:31:38.046089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.046202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.046234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.046250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.046282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.046343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.046364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.046406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.046438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:38.056180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.056465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.056509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.056529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.056662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.056788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.056823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.056839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.056897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.228 [2024-10-07 11:31:38.067167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.067283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.067330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.067351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.067391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.067422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.067440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.067454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.067484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:38.077261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.077394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.077427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.077445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.077476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.078691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.078731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.078750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.078976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.228 [2024-10-07 11:31:38.087398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.228 [2024-10-07 11:31:38.087514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.228 [2024-10-07 11:31:38.087567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.228 [2024-10-07 11:31:38.087586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.228 [2024-10-07 11:31:38.087618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.228 [2024-10-07 11:31:38.087651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.228 [2024-10-07 11:31:38.087669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.228 [2024-10-07 11:31:38.087683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.228 [2024-10-07 11:31:38.087713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.228 [2024-10-07 11:31:38.097735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.097940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.097974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.097991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.098046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.098083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.098101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.098115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.098146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.229 [2024-10-07 11:31:38.108440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.108561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.108593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.108611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.108642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.108675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.108693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.108708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.108738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.229 [2024-10-07 11:31:38.118536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.118651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.118682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.118700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.118732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.119943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.119982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.120000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.120241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.229 [2024-10-07 11:31:38.128739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.128856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.128888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.128905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.128937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.128971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.128988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.129002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.129033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.229 [2024-10-07 11:31:38.138832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.139101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.139145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.139165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.139297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.139443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.139469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.139484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.139540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.229 [2024-10-07 11:31:38.149795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.149911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.149943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.149960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.149992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.150024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.150041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.150056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.150103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.229 [2024-10-07 11:31:38.159886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.160002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.160033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.160051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.160082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.160114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.160131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.160146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.160176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.229 [2024-10-07 11:31:38.170120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.170237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.170269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.170297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.170345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.170382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.170401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.170415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.170460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.229 [2024-10-07 11:31:38.180215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.180345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.180376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.180393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.180579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.180739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.180772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.180790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.180909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.229 [2024-10-07 11:31:38.191292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.191429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.191461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.191510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.229 [2024-10-07 11:31:38.191545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.229 [2024-10-07 11:31:38.191577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.229 [2024-10-07 11:31:38.191595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.229 [2024-10-07 11:31:38.191610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.229 [2024-10-07 11:31:38.191640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.229 [2024-10-07 11:31:38.201398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.229 [2024-10-07 11:31:38.201518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.229 [2024-10-07 11:31:38.201549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.229 [2024-10-07 11:31:38.201567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.201599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.201630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.201648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.201662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.201692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.230 [2024-10-07 11:31:38.211686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.211805] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.211836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.211853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.211885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.211917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.211935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.211949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.211978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.230 [2024-10-07 11:31:38.221783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.221911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.221943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.221960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.222155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.222311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.222376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.222396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.222519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.230 [2024-10-07 11:31:38.233067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.233225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.233260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.233279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.233313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.233363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.233382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.233398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.233428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.230 [2024-10-07 11:31:38.243227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.243428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.243478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.243510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.245213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.245612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.245675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.245708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.246684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.230 [2024-10-07 11:31:38.253772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.253943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.253988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.254015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.254062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.255494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.255549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.255576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.255875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.230 [2024-10-07 11:31:38.264040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.264298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.264351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.264372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.264496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.264562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.264585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.264600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.264633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.230 [2024-10-07 11:31:38.274938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.275061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.275093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.275111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.275143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.275175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.275194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.275208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.275238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.230 [2024-10-07 11:31:38.285035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.285153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.285185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.285202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.285234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.285266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.285284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.285298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.285344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.230 [2024-10-07 11:31:38.295572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.295700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.295732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.295750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.230 [2024-10-07 11:31:38.295812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.230 [2024-10-07 11:31:38.295848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.230 [2024-10-07 11:31:38.295867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.230 [2024-10-07 11:31:38.295881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.230 [2024-10-07 11:31:38.295912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.230 [2024-10-07 11:31:38.305667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.230 [2024-10-07 11:31:38.305791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.230 [2024-10-07 11:31:38.305823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.230 [2024-10-07 11:31:38.305842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.306032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.306186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.306220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.306236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.306385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.316883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.317001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.317033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.317050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.317082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.317114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.317132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.317146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.317176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.231 [2024-10-07 11:31:38.326975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.327092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.327123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.327140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.328339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.328583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.328620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.328652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.329479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.337069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.337187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.337223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.337240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.337272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.337304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.337339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.337355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.337386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.231 [2024-10-07 11:31:38.347310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.347540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.347575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.347593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.347713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.347776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.347799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.347814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.347856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.358139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.358258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.358300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.358334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.358370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.358406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.358424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.358438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.358468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.231 [2024-10-07 11:31:38.368233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.368368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.368416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.368436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.369640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.369900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.369934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.369956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.370793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.378439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.378556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.378588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.378605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.378636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.378669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.378687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.378712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.378743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.231 [2024-10-07 11:31:38.388742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.388857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.388889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.388906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.388944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.388976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.388994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.389008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.389037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.400409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.400525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.400557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.400575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.400606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.400657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.400677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.400691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.400722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.231 [2024-10-07 11:31:38.410500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.410622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.410663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.231 [2024-10-07 11:31:38.410680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.231 [2024-10-07 11:31:38.410712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.231 [2024-10-07 11:31:38.410744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.231 [2024-10-07 11:31:38.410762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.231 [2024-10-07 11:31:38.410776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.231 [2024-10-07 11:31:38.410805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.231 [2024-10-07 11:31:38.420814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.231 [2024-10-07 11:31:38.420931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.231 [2024-10-07 11:31:38.420962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.420980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.421011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.421043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.421061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.421075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.421104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.431125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.431243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.431275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.431293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.431339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.431374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.431392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.431406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.431455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.232 [2024-10-07 11:31:38.442796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.442915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.442947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.442965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.442996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.443028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.443047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.443061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.443091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.452889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.453006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.453039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.453057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.453088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.454276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.454341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.454361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.454595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.232 [2024-10-07 11:31:38.463070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.463186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.463218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.463235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.463267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.463299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.463330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.463348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.463380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.473505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.473645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.473678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.473711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.473746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.473782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.473799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.473813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.473844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.232 [2024-10-07 11:31:38.484067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.484187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.484218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.484235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.484267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.484299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.484334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.484352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.484383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.494163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.494278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.494334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.494354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.494399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.495595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.495633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.495651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.495862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.232 7954.50 IOPS, 31.07 MiB/s [2024-10-07T11:31:52.755Z] [2024-10-07 11:31:38.504336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.504453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.504485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.504503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.504534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.504566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.504600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.504615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.504647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.514425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.514695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.514743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.514762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.514896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.515020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.515051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.515068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.515123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.232 [2024-10-07 11:31:38.525361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.232 [2024-10-07 11:31:38.525479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.232 [2024-10-07 11:31:38.525510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.232 [2024-10-07 11:31:38.525528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.232 [2024-10-07 11:31:38.525559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.232 [2024-10-07 11:31:38.525593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.232 [2024-10-07 11:31:38.525611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.232 [2024-10-07 11:31:38.525625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.232 [2024-10-07 11:31:38.525655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.232 [2024-10-07 11:31:38.535455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.535570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.535602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.535620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.536808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.537057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.537094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.537111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.537929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.233 [2024-10-07 11:31:38.545551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.545665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.545697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.545714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.545746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.545777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.545796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.545810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.545840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.233 [2024-10-07 11:31:38.555963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.556101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.556133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.556151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.556182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.556214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.556232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.556246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.556275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.233 [2024-10-07 11:31:38.566488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.566603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.566634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.566651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.566683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.566714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.566733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.566747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.566777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.233 [2024-10-07 11:31:38.576578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.576693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.576724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.576742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.577947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.578183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.578227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.578244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.579074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.233 [2024-10-07 11:31:38.586681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.586795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.586826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.586847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.586878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.586910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.586929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.586943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.586972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.233 [2024-10-07 11:31:38.597015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.597160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.597193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.597211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.597243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.597288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.597305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.597335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.597369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.233 [2024-10-07 11:31:38.607539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.607656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.607687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.607705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.607738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.607770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.607788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.607817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.607849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.233 [2024-10-07 11:31:38.617632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.617752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.617783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.617800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.619014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.619251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.619286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.619304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.620127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.233 [2024-10-07 11:31:38.627727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.627842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.627873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.627890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.627922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.627954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.627972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.627986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.628016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.233 [2024-10-07 11:31:38.638106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.233 [2024-10-07 11:31:38.638246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.233 [2024-10-07 11:31:38.638279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.233 [2024-10-07 11:31:38.638311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.233 [2024-10-07 11:31:38.638360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.233 [2024-10-07 11:31:38.638392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.233 [2024-10-07 11:31:38.638410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.233 [2024-10-07 11:31:38.638424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.233 [2024-10-07 11:31:38.638455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.234 [2024-10-07 11:31:38.648636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.648767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.648798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.648815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.648849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.648880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.648898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.648912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.648942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.234 [2024-10-07 11:31:38.658739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.658855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.658886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.658904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.660091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.660349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.660381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.660398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.661201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.234 [2024-10-07 11:31:38.668830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.668950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.668982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.669000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.669032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.669064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.669082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.669096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.669126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.234 [2024-10-07 11:31:38.679070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.679285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.679330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.679351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.679472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.679553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.679578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.679592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.679623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.234 [2024-10-07 11:31:38.689872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.689990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.690021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.690038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.690070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.690102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.690120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.690134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.690164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.234 [2024-10-07 11:31:38.699965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.700085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.700117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.700134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.701343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.701575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.701623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.701640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.702471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.234 [2024-10-07 11:31:38.710061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.710179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.710211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.710229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.710266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.710313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.710348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.710363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.710411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.234 [2024-10-07 11:31:38.720384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.720524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.720557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.720575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.720607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.720639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.720657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.720670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.720701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.234 [2024-10-07 11:31:38.730887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.731009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.731041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.731058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.731089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.731121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.234 [2024-10-07 11:31:38.731139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.234 [2024-10-07 11:31:38.731154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.234 [2024-10-07 11:31:38.731183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.234 [2024-10-07 11:31:38.740981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.234 [2024-10-07 11:31:38.742254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.234 [2024-10-07 11:31:38.742307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.234 [2024-10-07 11:31:38.742341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.234 [2024-10-07 11:31:38.742566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.234 [2024-10-07 11:31:38.743414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.743449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.743467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.744657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.751071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.751184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.751216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.751247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.751281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.751314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.751348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.751362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.751394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.235 [2024-10-07 11:31:38.761338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.761454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.761485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.761503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.761534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.761566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.761594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.761608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.761638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.771792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.771912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.771943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.771960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.771992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.772025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.772043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.772065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.772096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.235 [2024-10-07 11:31:38.781885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.783163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.783210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.783230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.783457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.784268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.784333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.784353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.785563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.791975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.792091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.792123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.792140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.792183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.792215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.792233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.792247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.792277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.235 [2024-10-07 11:31:38.802250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.802418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.802452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.802470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.802502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.802535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.802552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.802566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.802597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.812791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.812911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.812943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.812961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.812993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.813024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.813043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.813057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.813087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.235 [2024-10-07 11:31:38.822885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.823012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.823043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.823061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.824269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.824515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.824551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.824568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.825386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.832990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.833115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.833147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.833164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.833196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.833227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.833246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.833260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.833289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.235 [2024-10-07 11:31:38.843092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.843386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.843430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.843449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.843581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.235 [2024-10-07 11:31:38.843717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.235 [2024-10-07 11:31:38.843751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.235 [2024-10-07 11:31:38.843768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.235 [2024-10-07 11:31:38.843824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.235 [2024-10-07 11:31:38.854066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.235 [2024-10-07 11:31:38.854185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.235 [2024-10-07 11:31:38.854218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.235 [2024-10-07 11:31:38.854235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.235 [2024-10-07 11:31:38.854297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.854349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.854369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.854385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.854415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.864156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.864273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.864305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.864338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.864372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.864403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.864421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.864435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.865619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.236 [2024-10-07 11:31:38.874402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.874517] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.874548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.874566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.874598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.874630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.874648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.874662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.874692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.884493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.884761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.884805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.884824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.884957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.885082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.885117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.885149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.885208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.236 [2024-10-07 11:31:38.895451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.895568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.895600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.895617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.895649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.895681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.895699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.895718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.895748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.905716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.905842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.905874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.905892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.905924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.905955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.905973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.905988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.906018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.236 [2024-10-07 11:31:38.916277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.916417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.916451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.916468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.916502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.916548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.916569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.916585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.916615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.926383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.926518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.926550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.926568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.926600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.926632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.926651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.926665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.926695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.236 [2024-10-07 11:31:38.937766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.937887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.937919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.937936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.937969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.938001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.938019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.938032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.938063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.947878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.947995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.948026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.948043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.948075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.948107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.948124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.948138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.948168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.236 [2024-10-07 11:31:38.958712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.958834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.958867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.958884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.236 [2024-10-07 11:31:38.958916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.236 [2024-10-07 11:31:38.958963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.236 [2024-10-07 11:31:38.958983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.236 [2024-10-07 11:31:38.958997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.236 [2024-10-07 11:31:38.959027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.236 [2024-10-07 11:31:38.968801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.236 [2024-10-07 11:31:38.968916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.236 [2024-10-07 11:31:38.968948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.236 [2024-10-07 11:31:38.968965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:38.969003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:38.969035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:38.969053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:38.969066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:38.969096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:38.980151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:38.980266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:38.980297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:38.980327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:38.980363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:38.980396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:38.980413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:38.980428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:38.980457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.237 [2024-10-07 11:31:38.990239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:38.990381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:38.990413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:38.990431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:38.990463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:38.990495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:38.990512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:38.990526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:38.990573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:39.000908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.001033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.001065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.001082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.001115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.001147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.001165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.001179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.001209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.237 [2024-10-07 11:31:39.011000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.011117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.011149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.011166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.011197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.011229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.011247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.011261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.011291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:39.022367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.022486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.022518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.022536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.022568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.022600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.022618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.022633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.022662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.237 [2024-10-07 11:31:39.032462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.032576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.032608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.032641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.032675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.032708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.032726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.032740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.032770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:39.043249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.043389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.043422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.043439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.043471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.043504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.043522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.043536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.043567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.237 [2024-10-07 11:31:39.053353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.053469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.053499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.053517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.053548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.053580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.053597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.053611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.053641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:39.064740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.064857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.064888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.064916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.064947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.064980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.065016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.065031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.065063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.237 [2024-10-07 11:31:39.074828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.074943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.074975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.237 [2024-10-07 11:31:39.074992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.237 [2024-10-07 11:31:39.075023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.237 [2024-10-07 11:31:39.075055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.237 [2024-10-07 11:31:39.075072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.237 [2024-10-07 11:31:39.075086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.237 [2024-10-07 11:31:39.076273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.237 [2024-10-07 11:31:39.085042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.237 [2024-10-07 11:31:39.085158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.237 [2024-10-07 11:31:39.085190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.085207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.085238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.085270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.085288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.085302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.085347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.238 [2024-10-07 11:31:39.095131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.095248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.095279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.095296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.095496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.095639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.095674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.095693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.095812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.238 [2024-10-07 11:31:39.106222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.106365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.106398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.106415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.106448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.106480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.106498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.106512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.106543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.238 [2024-10-07 11:31:39.116330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.116448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.116487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.116504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.116536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.117724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.117762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.117780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.117984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.238 [2024-10-07 11:31:39.126491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.126612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.126644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.126662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.126694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.126726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.126744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.126758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.126788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.238 [2024-10-07 11:31:39.136853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.136992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.137025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.137042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.137091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.137124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.137141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.137155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.137186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.238 [2024-10-07 11:31:39.147427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.147544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.147575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.147592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.147624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.147655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.147673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.147687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.147717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.238 [2024-10-07 11:31:39.157516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.157631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.157662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.157679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.158878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.159123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.159156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.159173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.159991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.238 [2024-10-07 11:31:39.167609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.167724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.167756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.167773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.167805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.167837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.167855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.167883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.238 [2024-10-07 11:31:39.167916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.238 [2024-10-07 11:31:39.177964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.238 [2024-10-07 11:31:39.178108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.238 [2024-10-07 11:31:39.178140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.238 [2024-10-07 11:31:39.178158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.238 [2024-10-07 11:31:39.178189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.238 [2024-10-07 11:31:39.178221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.238 [2024-10-07 11:31:39.178238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.238 [2024-10-07 11:31:39.178252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.178282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.188573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.188685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.188723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.188740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.188784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.188818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.188837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.188851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.188881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.239 [2024-10-07 11:31:39.198662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.198778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.198809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.198827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.198868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.198899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.198918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.198932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.200121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.208854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.208991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.209023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.209040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.209072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.209104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.209121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.209135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.209166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.239 [2024-10-07 11:31:39.218968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.219243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.219286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.219306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.219454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.219580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.219606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.219621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.219678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.229945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.230062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.230093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.230110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.230142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.230174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.230192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.230214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.230245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.239 [2024-10-07 11:31:39.240040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.240155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.240186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.240204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.240236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.240284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.240312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.240344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.241538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.250327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.250446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.250477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.250494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.250526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.250569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.250587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.250601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.250630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.239 [2024-10-07 11:31:39.260423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.260546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.260577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.260595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.260783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.260925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.260950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.260965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.261082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.271699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.271857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.271891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.271909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.271943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.271976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.271995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.272011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.272067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.239 [2024-10-07 11:31:39.281818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.281958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.281991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.282010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.282043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.282089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.282110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.239 [2024-10-07 11:31:39.282126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.239 [2024-10-07 11:31:39.283355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.239 [2024-10-07 11:31:39.292177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.239 [2024-10-07 11:31:39.292343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.239 [2024-10-07 11:31:39.292378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.239 [2024-10-07 11:31:39.292396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.239 [2024-10-07 11:31:39.292431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.239 [2024-10-07 11:31:39.292464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.239 [2024-10-07 11:31:39.292483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.292498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.292529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.302298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.302455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.302489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.302507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.302697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.302840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.302872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.302889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.303007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.240 [2024-10-07 11:31:39.313508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.313666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.313700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.313745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.313783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.313816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.313834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.313850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.313881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.323633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.323794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.323828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.323847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.323882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.323915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.323933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.323948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.325162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.240 [2024-10-07 11:31:39.334050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.334218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.334252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.334271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.334336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.334375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.334394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.334410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.334441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.344167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.344313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.344359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.344378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.344569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.344711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.344776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.344796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.344917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.240 [2024-10-07 11:31:39.355457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.355605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.355638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.355663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.355696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.355729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.355747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.355762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.355794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.365568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.365722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.365756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.365784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.365818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.365851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.365869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.365884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.365915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.240 [2024-10-07 11:31:39.376051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.376214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.376248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.376267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.376301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.376367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.376390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.376406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.376437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.386179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.386350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.386385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.386404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.386595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.386737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.386763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.386778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.386895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.240 [2024-10-07 11:31:39.398822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.399673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.399721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.399743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.399846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.399886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.240 [2024-10-07 11:31:39.399905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.240 [2024-10-07 11:31:39.399920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.240 [2024-10-07 11:31:39.399953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.240 [2024-10-07 11:31:39.410767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.240 [2024-10-07 11:31:39.412104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.240 [2024-10-07 11:31:39.412151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.240 [2024-10-07 11:31:39.412172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.240 [2024-10-07 11:31:39.412313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.240 [2024-10-07 11:31:39.412380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.412401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.412428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.412460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.420894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.421986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.422035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.422057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.422278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.422395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.422420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.422438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.422472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.241 [2024-10-07 11:31:39.431712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.431900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.431935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.431954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.431989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.432024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.432043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.432060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.432091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.442532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.442690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.442724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.442742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.442777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.442822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.442843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.442859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.442890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.241 [2024-10-07 11:31:39.454595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.454775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.454810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.454829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.454864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.454897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.454916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.454954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.454987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.464734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.464891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.464925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.464943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.464976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.465009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.465027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.465043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.466247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.241 [2024-10-07 11:31:39.475097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.475275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.475312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.475346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.475384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.475433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.475455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.475471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.475502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.485239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.485436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.485472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.485491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.485690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.485835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.485871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.485890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.486011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.241 8384.33 IOPS, 32.75 MiB/s [2024-10-07T11:31:52.764Z] [2024-10-07 11:31:39.499808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.500162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.500210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.500232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.500381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.500451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.500474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.500490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.500524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.510560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.510725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.510759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.510778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.510814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.510847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.510866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.510881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.510912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.241 [2024-10-07 11:31:39.520683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.520831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.520864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.520882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.241 [2024-10-07 11:31:39.520916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.241 [2024-10-07 11:31:39.520949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.241 [2024-10-07 11:31:39.520967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.241 [2024-10-07 11:31:39.520981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.241 [2024-10-07 11:31:39.522186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.241 [2024-10-07 11:31:39.531071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.241 [2024-10-07 11:31:39.531237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.241 [2024-10-07 11:31:39.531272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.241 [2024-10-07 11:31:39.531290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.531369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.531403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.531422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.531437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.531469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.242 [2024-10-07 11:31:39.541191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.541360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.541394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.541413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.541619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.541762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.541799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.541818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.541946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.242 [2024-10-07 11:31:39.552540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.552706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.552739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.552758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.552793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.552826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.552845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.552861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.552897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.242 [2024-10-07 11:31:39.562680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.562836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.562869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.562887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.562922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.562954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.562973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.563011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.563045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.242 [2024-10-07 11:31:39.573483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.573718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.573752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.573771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.573812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.573847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.573866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.573881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.573912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.242 [2024-10-07 11:31:39.583602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.583719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.583750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.583768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.583800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.583846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.583867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.583882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.583912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.242 [2024-10-07 11:31:39.595183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.595317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.595363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.595382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.595415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.595448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.595466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.595481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.595512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.242 [2024-10-07 11:31:39.605299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.605433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.605485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.605505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.605537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.606775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.606819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.606837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.607049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.242 [2024-10-07 11:31:39.615621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.615757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.615790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.615808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.615841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.615874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.615893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.615908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.242 [2024-10-07 11:31:39.615939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.242 [2024-10-07 11:31:39.625727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.242 [2024-10-07 11:31:39.625864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.242 [2024-10-07 11:31:39.625896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.242 [2024-10-07 11:31:39.625914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.242 [2024-10-07 11:31:39.626103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.242 [2024-10-07 11:31:39.626245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.242 [2024-10-07 11:31:39.626270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.242 [2024-10-07 11:31:39.626298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.626434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.636866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.636988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.637020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.637038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.637071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.637126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.637146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.637160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.637191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.243 [2024-10-07 11:31:39.646960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.647077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.647108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.647126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.647158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.647190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.647209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.647231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.647268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.657289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.657431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.657463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.657482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.657514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.657547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.657565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.657579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.657611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.243 [2024-10-07 11:31:39.667404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.667549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.667583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.667608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.667641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.667673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.667691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.667707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.667894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.678878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.679037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.679070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.679089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.679128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.679161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.679179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.679195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.679226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.243 [2024-10-07 11:31:39.688995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.689115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.689147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.689164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.689196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.689229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.689247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.689261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.689291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.699433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.699561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.699592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.699610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.699642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.699689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.699711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.699726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.699756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.243 [2024-10-07 11:31:39.709529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.709646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.709677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.709716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.709905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.710047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.710083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.710100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.710219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.720691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.720808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.720841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.720858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.720891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.720926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.720947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.720961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.720992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.243 [2024-10-07 11:31:39.730782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.730905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.730937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.730954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.730986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.731018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.243 [2024-10-07 11:31:39.731036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.243 [2024-10-07 11:31:39.731052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.243 [2024-10-07 11:31:39.731082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.243 [2024-10-07 11:31:39.741193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.243 [2024-10-07 11:31:39.741341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.243 [2024-10-07 11:31:39.741374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.243 [2024-10-07 11:31:39.741391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.243 [2024-10-07 11:31:39.741424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.243 [2024-10-07 11:31:39.741457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.741493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.741508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.741540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.751310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.751439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.751471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.751488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.751675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.751818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.751853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.751870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.751988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.244 [2024-10-07 11:31:39.762576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.762693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.762725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.762743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.762775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.762814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.762833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.762847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.762877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.772670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.772785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.772817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.772834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.772865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.772908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.772925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.772939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.772968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.244 [2024-10-07 11:31:39.783254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.783411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.783444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.783462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.783494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.783526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.783543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.783557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.783588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.793433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.793550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.793580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.793597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.793628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.793810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.793836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.793850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.793996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.244 [2024-10-07 11:31:39.804906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.805027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.805063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.805080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.805113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.805144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.805163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.805176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.805206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.815015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.815134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.815165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.815182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.815232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.815264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.815282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.815296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.815341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.244 [2024-10-07 11:31:39.825364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.825490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.825521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.825538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.825571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.825603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.825621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.825636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.825666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.835453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.835575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.835606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.835624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.835810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.835958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.835993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.836010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.836128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.244 [2024-10-07 11:31:39.846890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.847024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.847055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.847073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.244 [2024-10-07 11:31:39.847105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.244 [2024-10-07 11:31:39.847137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.244 [2024-10-07 11:31:39.847155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.244 [2024-10-07 11:31:39.847190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.244 [2024-10-07 11:31:39.847224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.244 [2024-10-07 11:31:39.856982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.244 [2024-10-07 11:31:39.857099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.244 [2024-10-07 11:31:39.857130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.244 [2024-10-07 11:31:39.857147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.857179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.857211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.857228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.857242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.857272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.867551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.867693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.867724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.867758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.867789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.867820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.867838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.867851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.867881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.245 [2024-10-07 11:31:39.877647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.877764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.877803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.877820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.877851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.877884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.877901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.877915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.878100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.888949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.889076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.889122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.889141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.889173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.889206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.889223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.889238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.889268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.245 [2024-10-07 11:31:39.899043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.899162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.899193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.899210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.899242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.899274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.899292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.899307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.899353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.909405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.909530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.909562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.909579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.909612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.909643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.909661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.909676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.909705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.245 [2024-10-07 11:31:39.919498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.919768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.919811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.919831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.919963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.920106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.920132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.920147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.920203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.930523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.930640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.930672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.930689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.930721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.930752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.930770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.930785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.930815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.245 [2024-10-07 11:31:39.940613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.940730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.940761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.940778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.940810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.942012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.942051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.942069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.942280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.950911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.951028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.951060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.951077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.951109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.951141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.951160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.951174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.951204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.245 [2024-10-07 11:31:39.961031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.961446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.961508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.245 [2024-10-07 11:31:39.961541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.245 [2024-10-07 11:31:39.961722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.245 [2024-10-07 11:31:39.961911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.245 [2024-10-07 11:31:39.961962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.245 [2024-10-07 11:31:39.962008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.245 [2024-10-07 11:31:39.962095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.245 [2024-10-07 11:31:39.972210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.245 [2024-10-07 11:31:39.972349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.245 [2024-10-07 11:31:39.972383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:39.972401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:39.972434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:39.972467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:39.972485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:39.972499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:39.972530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.246 [2024-10-07 11:31:39.982332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:39.982460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:39.982492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:39.982509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:39.983702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:39.983934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:39.983981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:39.983999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:39.984819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.246 [2024-10-07 11:31:39.992472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:39.992589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:39.992620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:39.992655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:39.992689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:39.992721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:39.992740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:39.992754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:39.992785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.246 [2024-10-07 11:31:40.002562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.002858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.002903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.002923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.003055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.003181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.003216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.003233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.003289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.246 [2024-10-07 11:31:40.013629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.013748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.013780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.013798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.013830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.013862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.013881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.013896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.013926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.246 [2024-10-07 11:31:40.023719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.023837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.023868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.023886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.023918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.023949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.023986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.024001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.024032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.246 [2024-10-07 11:31:40.034337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.034470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.034503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.034520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.034553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.034585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.034604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.034618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.034648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.246 [2024-10-07 11:31:40.044429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.044549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.044579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.044597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.044783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.044925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.044970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.044997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.045119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.246 [2024-10-07 11:31:40.055704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.055833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.055866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.055883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.055916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.055948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.055967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.055981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.056012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.246 [2024-10-07 11:31:40.065808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.065951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.065984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.066002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.066034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.066067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.066085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.066099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.066130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.246 [2024-10-07 11:31:40.076645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.246 [2024-10-07 11:31:40.076771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.246 [2024-10-07 11:31:40.076803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.246 [2024-10-07 11:31:40.076820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.246 [2024-10-07 11:31:40.076853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.246 [2024-10-07 11:31:40.076885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.246 [2024-10-07 11:31:40.076903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.246 [2024-10-07 11:31:40.076917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.246 [2024-10-07 11:31:40.076947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.247 [2024-10-07 11:31:40.086738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.086852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.086884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.086902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.086935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.086967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.086985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.086999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.087197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.247 [2024-10-07 11:31:40.098398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.098525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.098557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.098575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.098630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.098663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.098682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.098697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.098727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.247 [2024-10-07 11:31:40.108566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.108683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.108714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.108732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.108764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.108796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.108814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.108827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.108857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.247 [2024-10-07 11:31:40.119243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.119392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.119425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.119442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.119475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.119508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.119526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.119540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.119571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.247 [2024-10-07 11:31:40.129351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.129470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.129502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.129520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.129553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.129586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.129604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.129643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.129676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.247 [2024-10-07 11:31:40.141478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.141610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.141642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.141660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.141693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.141726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.141745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.141760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.141789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.247 [2024-10-07 11:31:40.151677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.151799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.151831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.151849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.151881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.151914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.151932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.151947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.151976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.247 [2024-10-07 11:31:40.162415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.162548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.162581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.162599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.162632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.162665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.162683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.162698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.162729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.247 [2024-10-07 11:31:40.172520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.172640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.172694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.172714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.247 [2024-10-07 11:31:40.172747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.247 [2024-10-07 11:31:40.172779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.247 [2024-10-07 11:31:40.172797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.247 [2024-10-07 11:31:40.172811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.247 [2024-10-07 11:31:40.172841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.247 [2024-10-07 11:31:40.184212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.247 [2024-10-07 11:31:40.184352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.247 [2024-10-07 11:31:40.184385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.247 [2024-10-07 11:31:40.184403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.184437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.184470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.184489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.184503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.184534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.194516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.194641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.194674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.194692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.194725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.194758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.194776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.194791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.194821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.248 [2024-10-07 11:31:40.205283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.205432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.205464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.205481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.205514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.205572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.205591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.205606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.205637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.215390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.215516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.215547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.215565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.215597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.215629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.215647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.215661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.215691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.248 [2024-10-07 11:31:40.227154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.227288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.227333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.227353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.227387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.227419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.227437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.227452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.227483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.237339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.237468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.237500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.237518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.237550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.237581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.237599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.237614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.237669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.248 [2024-10-07 11:31:40.247967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.248094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.248125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.248143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.248175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.248207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.248225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.248240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.248270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.258062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.258186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.258218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.258235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.258463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.258607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.258634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.258650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.258766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.248 [2024-10-07 11:31:40.269336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.269456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.269488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.269506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.269538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.269571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.269589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.269603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.269633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.279433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.279551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.279583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.279629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.279664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.279696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.279714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.279728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.279759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.248 [2024-10-07 11:31:40.290069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.290194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.290226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.248 [2024-10-07 11:31:40.290244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.248 [2024-10-07 11:31:40.290276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.248 [2024-10-07 11:31:40.290340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.248 [2024-10-07 11:31:40.290362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.248 [2024-10-07 11:31:40.290376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.248 [2024-10-07 11:31:40.290407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.248 [2024-10-07 11:31:40.300164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.248 [2024-10-07 11:31:40.300291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.248 [2024-10-07 11:31:40.300337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.300356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.300543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.300684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.300720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.300737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.300866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.311368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.311486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.311517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.311534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.311566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.311598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.311636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.311651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.311683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.249 [2024-10-07 11:31:40.321462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.321579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.321610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.321627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.321659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.321692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.321710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.321724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.321754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.331806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.331947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.331978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.331996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.332029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.332076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.332098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.332112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.332142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.249 [2024-10-07 11:31:40.341911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.342027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.342070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.342087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.342274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.342446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.342474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.342490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.342607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.353036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.353174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.353206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.353223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.353255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.353287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.353306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.353334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.353369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.249 [2024-10-07 11:31:40.363146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.363263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.363294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.363311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.363359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.363392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.363410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.363424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.363454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.373765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.373890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.373922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.373939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.373971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.374004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.374022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.374036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.374066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.249 [2024-10-07 11:31:40.383874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.384009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.384041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.384058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.384112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.384144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.384162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.384176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.384386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.395462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.395581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.395613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.395631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.395662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.395694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.395712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.395727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.395757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.249 [2024-10-07 11:31:40.405561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.405677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.249 [2024-10-07 11:31:40.405709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.249 [2024-10-07 11:31:40.405726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.249 [2024-10-07 11:31:40.405758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.249 [2024-10-07 11:31:40.405789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.249 [2024-10-07 11:31:40.405807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.249 [2024-10-07 11:31:40.405821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.249 [2024-10-07 11:31:40.405865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.249 [2024-10-07 11:31:40.415967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.249 [2024-10-07 11:31:40.416092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.416124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.416142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.416174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.416206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.416224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.416254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.416287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.427027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.427149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.427181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.427198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.427230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.427276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.427297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.427312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.427361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.250 [2024-10-07 11:31:40.439182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.439372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.439405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.439422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.439455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.439488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.439506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.439520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.439565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.449278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.449404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.449437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.449455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.449486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.449518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.449536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.449551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.449581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.250 [2024-10-07 11:31:40.459933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.460058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.460110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.460130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.460176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.460211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.460230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.460244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.460275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.470027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.470153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.470185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.470203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.470235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.470267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.470299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.470329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.470365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.250 [2024-10-07 11:31:40.481447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.481567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.481599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.481617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.481649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.481682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.481700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.481715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.481745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.491538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.491653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.491685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.491707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.491754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.491837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.491869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.491891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.493396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.250 8562.25 IOPS, 33.45 MiB/s [2024-10-07T11:31:52.773Z] [2024-10-07 11:31:40.502806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.503018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.503052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.503071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.504287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.505397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.505436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.505455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.505675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.512909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.513028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.513061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.513080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.513112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.513144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.513162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.513176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.513206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.250 [2024-10-07 11:31:40.523213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.250 [2024-10-07 11:31:40.523348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.250 [2024-10-07 11:31:40.523382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.250 [2024-10-07 11:31:40.523400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.250 [2024-10-07 11:31:40.523434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.250 [2024-10-07 11:31:40.523466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.250 [2024-10-07 11:31:40.523484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.250 [2024-10-07 11:31:40.523498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.250 [2024-10-07 11:31:40.523551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.250 [2024-10-07 11:31:40.533308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.533439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.533471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.533489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.533521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.533553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.533572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.533586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.533616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.251 [2024-10-07 11:31:40.543758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.543890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.543922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.543939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.543972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.544004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.544022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.544036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.544066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.251 [2024-10-07 11:31:40.553857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.553976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.554008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.554026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.554212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.554388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.554416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.554431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.554549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.251 [2024-10-07 11:31:40.565073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.565195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.565228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.565272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.565307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.565357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.565376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.565391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.565421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.251 [2024-10-07 11:31:40.575173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.575294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.575342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.575362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.575395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.575427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.575445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.575460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.575490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.251 [2024-10-07 11:31:40.585657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.585782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.585814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.585831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.585864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.585912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.585933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.585948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.585979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.251 [2024-10-07 11:31:40.595755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.595878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.595910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.595927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.596116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.596258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.596311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.596344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.596466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.251 [2024-10-07 11:31:40.607015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.607136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.607168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.607186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.607218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.607250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.607268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.607282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.607312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.251 [2024-10-07 11:31:40.617115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.617232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.617263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.617281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.617312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.617370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.251 [2024-10-07 11:31:40.617389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.251 [2024-10-07 11:31:40.617403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.251 [2024-10-07 11:31:40.617434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.251 [2024-10-07 11:31:40.627644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.251 [2024-10-07 11:31:40.627770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.251 [2024-10-07 11:31:40.627801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.251 [2024-10-07 11:31:40.627819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.251 [2024-10-07 11:31:40.627851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.251 [2024-10-07 11:31:40.627884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.627903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.627917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.627948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.637754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.637870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.637902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.637920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.637951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.638138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.638174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.638192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.638360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.252 [2024-10-07 11:31:40.649012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.649132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.649172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.649190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.649222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.649254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.649273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.649287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.649330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.659106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.659223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.659254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.659271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.659303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.659355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.659375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.659389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.659419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.252 [2024-10-07 11:31:40.669522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.669647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.669679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.669697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.669749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.669782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.669800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.669815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.669845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.679614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.679740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.679772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.679790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.679821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.679854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.679871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.679886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.679924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.252 [2024-10-07 11:31:40.690928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.692217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.692263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.692284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.692444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.692486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.692505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.692520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.692551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.701025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.701143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.701175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.701193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.702101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.702340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.702368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.702400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.702491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.252 [2024-10-07 11:31:40.712041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.712165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.712196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.712214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.712246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.712277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.712295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.712309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.712360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.723167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.723293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.723339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.723359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.723392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.723424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.723442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.723456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.723486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.252 [2024-10-07 11:31:40.734950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.735070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.252 [2024-10-07 11:31:40.735101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.252 [2024-10-07 11:31:40.735119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.252 [2024-10-07 11:31:40.735151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.252 [2024-10-07 11:31:40.735183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.252 [2024-10-07 11:31:40.735202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.252 [2024-10-07 11:31:40.735216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.252 [2024-10-07 11:31:40.735247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.252 [2024-10-07 11:31:40.745048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.252 [2024-10-07 11:31:40.745181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.745213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.745231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.745263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.745309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.745348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.745363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.745395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.755446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.755578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.755611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.755629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.755662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.755694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.755713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.755727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.755757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.253 [2024-10-07 11:31:40.765550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.765683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.765722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.765746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.765778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.765819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.765837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.765853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.766044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.777101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.777223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.777255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.777272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.777305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.777382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.777428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.777444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.777478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.253 [2024-10-07 11:31:40.787201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.787341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.787382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.787399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.787432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.787465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.787482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.787496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.787526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.797728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.797858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.797891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.797909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.797942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.797974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.797992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.798007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.798037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.253 [2024-10-07 11:31:40.807825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.807943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.807976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.807996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.808194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.808351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.808382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.808399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.808536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.819133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.819256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.819289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.819310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.819359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.819392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.819411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.819425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.819455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.253 [2024-10-07 11:31:40.829231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.829365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.829398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.829416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.829449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.829482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.829501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.829515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.829546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.839741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.839869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.839901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.839918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.839951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.839998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.840019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.840034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.840065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.253 [2024-10-07 11:31:40.849839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.253 [2024-10-07 11:31:40.849959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.253 [2024-10-07 11:31:40.849991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.253 [2024-10-07 11:31:40.850027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.253 [2024-10-07 11:31:40.850219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.253 [2024-10-07 11:31:40.850395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.253 [2024-10-07 11:31:40.850432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.253 [2024-10-07 11:31:40.850449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.253 [2024-10-07 11:31:40.850570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.253 [2024-10-07 11:31:40.860993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.861111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.861143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.861160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.861192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.861225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.861243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.861256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.861286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.254 [2024-10-07 11:31:40.871088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.871205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.871236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.871254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.871286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.871332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.871353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.871368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.872567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.254 [2024-10-07 11:31:40.881364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.881501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.881535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.881552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.881585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.881617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.881650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.881666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.881699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.254 [2024-10-07 11:31:40.891471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.891747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.891792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.891811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.891944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.892070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.892105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.892122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.892179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.254 [2024-10-07 11:31:40.902533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.902653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.902686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.902703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.902734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.902767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.902785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.902799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.902830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.254 [2024-10-07 11:31:40.912628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.912750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.912783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.912800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.914007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.914243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.914280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.914309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.915129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.254 [2024-10-07 11:31:40.922722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.922839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.922871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.922888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.922919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.922951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.922969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.922983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.923013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.254 [2024-10-07 11:31:40.933135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.933274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.933313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.933348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.933382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.933414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.933432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.933445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.933476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.254 [2024-10-07 11:31:40.943761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.943892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.943933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.943951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.943984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.944026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.944045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.944059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.944090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.254 [2024-10-07 11:31:40.953868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.953992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.954024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.954042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.954095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.954128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.954146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.954165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.954197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.254 [2024-10-07 11:31:40.964282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.254 [2024-10-07 11:31:40.964424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.254 [2024-10-07 11:31:40.964457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.254 [2024-10-07 11:31:40.964476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.254 [2024-10-07 11:31:40.964508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.254 [2024-10-07 11:31:40.964540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.254 [2024-10-07 11:31:40.964558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.254 [2024-10-07 11:31:40.964573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.254 [2024-10-07 11:31:40.964604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.255 [2024-10-07 11:31:40.974393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:40.974513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:40.974546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:40.974565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:40.974752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:40.974896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:40.974922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:40.974937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:40.975053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.255 [2024-10-07 11:31:40.985652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:40.985769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:40.985801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:40.985819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:40.985852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:40.985884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:40.985902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:40.985936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:40.985970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.255 [2024-10-07 11:31:40.995741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:40.995867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:40.995899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:40.995916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:40.995948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:40.995979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:40.995997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:40.996012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:40.996042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.255 [2024-10-07 11:31:41.006170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.006307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.006354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.006373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.006407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.006459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.006481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.006496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.006527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.255 [2024-10-07 11:31:41.016265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.016398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.016431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.016449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.016481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.016668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.016696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.016711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.016846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.255 [2024-10-07 11:31:41.027665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.027830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.027865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.027882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.027916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.027956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.027974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.027989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.028042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.255 [2024-10-07 11:31:41.037790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.037906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.037949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.037967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.037999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.038031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.038054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.038068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.038098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.255 [2024-10-07 11:31:41.048864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.050648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.050715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.050760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.052135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.052530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.052588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.052617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.052767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.255 [2024-10-07 11:31:41.059285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.059451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.059487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.059505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.255 [2024-10-07 11:31:41.059542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.255 [2024-10-07 11:31:41.059605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.255 [2024-10-07 11:31:41.059625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.255 [2024-10-07 11:31:41.059639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.255 [2024-10-07 11:31:41.059675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.255 [2024-10-07 11:31:41.072829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.255 [2024-10-07 11:31:41.073116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.255 [2024-10-07 11:31:41.073162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.255 [2024-10-07 11:31:41.073182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.073399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.073585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.073622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.073640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.073768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.084269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.084407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.084441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.084459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.084495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.084531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.084550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.084564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.084599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.256 [2024-10-07 11:31:41.094390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.094522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.094566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.094585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.094622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.094659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.094677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.094692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.094752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.104947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.105088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.105120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.105138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.105176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.105212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.105231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.105245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.105280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.256 [2024-10-07 11:31:41.115059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.115187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.115220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.115238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.115453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.115601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.115646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.115664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.115788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.126383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.126527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.126560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.126579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.126616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.126653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.126672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.126688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.126723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.256 [2024-10-07 11:31:41.136502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.136638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.136670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.136718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.136759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.136796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.136814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.136829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.136863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.147015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.147170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.147204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.147222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.147261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.147331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.147355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.147371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.147407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.256 [2024-10-07 11:31:41.157138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.157279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.157313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.157349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.157545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.157693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.157730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.157750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.157873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.168410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.168534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.168566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.168584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.168621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.168657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.168695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.168711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.168747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.256 [2024-10-07 11:31:41.178516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.178665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.256 [2024-10-07 11:31:41.178699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.256 [2024-10-07 11:31:41.178717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.256 [2024-10-07 11:31:41.178755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.256 [2024-10-07 11:31:41.178793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.256 [2024-10-07 11:31:41.178812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.256 [2024-10-07 11:31:41.178827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.256 [2024-10-07 11:31:41.178863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.256 [2024-10-07 11:31:41.189182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.256 [2024-10-07 11:31:41.189356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.189391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.189409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.189449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.189487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.189506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.189529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.189564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.257 [2024-10-07 11:31:41.199306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.199467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.199500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.199519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.199573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.199614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.199633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.199648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.199840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.257 [2024-10-07 11:31:41.210759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.210883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.210915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.210933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.210969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.211006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.211024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.211038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.211073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.257 [2024-10-07 11:31:41.220870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.220990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.221023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.221040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.221078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.221113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.221132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.221146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.221198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.257 [2024-10-07 11:31:41.231386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.231515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.231547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.231564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.231601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.231637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.231656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.231670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.231705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.257 [2024-10-07 11:31:41.241491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.241611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.241643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.241660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.241875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.242025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.242063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.242080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.242203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.257 [2024-10-07 11:31:41.253177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.253301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.253347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.253366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.253404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.253441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.253459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.253473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.253508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.257 [2024-10-07 11:31:41.263285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.263408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.263440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.263457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.263493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.263530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.263548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.263562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.263596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.257 [2024-10-07 11:31:41.273748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.273881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.273913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.273931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.273975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.274013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.274031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.274067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.274105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.257 [2024-10-07 11:31:41.284548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.284671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.284703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.284722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.284766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.284803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.284822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.284837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.284872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.257 [2024-10-07 11:31:41.296028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.257 [2024-10-07 11:31:41.297341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.257 [2024-10-07 11:31:41.297387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.257 [2024-10-07 11:31:41.297407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.257 [2024-10-07 11:31:41.297570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.257 [2024-10-07 11:31:41.297631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.257 [2024-10-07 11:31:41.297652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.257 [2024-10-07 11:31:41.297667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.257 [2024-10-07 11:31:41.297731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.258 [2024-10-07 11:31:41.306364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.306494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.306526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.306544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.306581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.307485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.307523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.307541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.307736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.258 [2024-10-07 11:31:41.317522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.317661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.317693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.317711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.317747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.317794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.317815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.317830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.317864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.258 [2024-10-07 11:31:41.328698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.328818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.328849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.328867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.328904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.328940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.328958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.328973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.329017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.258 [2024-10-07 11:31:41.340799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.340956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.340988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.341009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.341045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.341082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.341100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.341114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.341149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.258 [2024-10-07 11:31:41.351117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.351236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.351268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.351286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.351340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.351395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.351415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.351429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.351466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.258 [2024-10-07 11:31:41.361220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.361352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.361385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.361403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.361977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.362165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.362199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.362216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.362356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.258 [2024-10-07 11:31:41.371640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.258 [2024-10-07 11:31:41.371788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.258 [2024-10-07 11:31:41.371821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.258 [2024-10-07 11:31:41.371838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.258 [2024-10-07 11:31:41.371874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.258 [2024-10-07 11:31:41.371910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.258 [2024-10-07 11:31:41.371929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.258 [2024-10-07 11:31:41.371943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.258 [2024-10-07 11:31:41.371978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.258 [2024-10-07 11:31:41.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.258 [2024-10-07 11:31:41.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.258 [2024-10-07 11:31:41.382757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:57.258 [2024-10-07 11:31:41.382772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.382836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.382868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.382899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.382929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.382976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.382991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 
11:31:41.383084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.259 [2024-10-07 11:31:41.383956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.383979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.383994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.384010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.384025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.384042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.384056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.259 [2024-10-07 11:31:41.384087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.259 [2024-10-07 11:31:41.384104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 
[2024-10-07 11:31:41.384409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.384739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.384983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.384998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77088 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:57.260 [2024-10-07 11:31:41.385393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.260 [2024-10-07 11:31:41.385564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.260 [2024-10-07 11:31:41.385579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 
11:31:41.385709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.261 [2024-10-07 11:31:41.385870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe35020 is same with the state(6) to be set 00:20:57.261 [2024-10-07 11:31:41.385903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.385915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.385926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76544 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.385940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.385956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.385966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.385978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77120 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77144 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77152 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77184 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77192 len:8 PRP1 0x0 PRP2 0x0 00:20:57.261 [2024-10-07 11:31:41.386660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.261 [2024-10-07 11:31:41.386687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.261 [2024-10-07 11:31:41.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77200 len:8 PRP1 0x0 PRP2 0x0 
00:20:57.261 [2024-10-07 11:31:41.386711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.261 [2024-10-07 11:31:41.386772] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe35020 was disconnected and freed. reset controller. 00:20:57.261 [2024-10-07 11:31:41.387886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.261 [2024-10-07 11:31:41.387973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.261 [2024-10-07 11:31:41.388136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.261 [2024-10-07 11:31:41.388488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.261 [2024-10-07 11:31:41.388523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.261 [2024-10-07 11:31:41.388542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.261 [2024-10-07 11:31:41.388595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.261 [2024-10-07 11:31:41.388619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.261 [2024-10-07 11:31:41.388635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.261 [2024-10-07 11:31:41.388733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.261 [2024-10-07 11:31:41.388762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.261 [2024-10-07 11:31:41.388791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.261 [2024-10-07 11:31:41.388810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.261 [2024-10-07 11:31:41.388825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.261 [2024-10-07 11:31:41.388843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.261 [2024-10-07 11:31:41.388857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.261 [2024-10-07 11:31:41.388870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.261 [2024-10-07 11:31:41.388902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.261 [2024-10-07 11:31:41.388919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.261 [2024-10-07 11:31:41.398523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.261 [2024-10-07 11:31:41.398575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.261 [2024-10-07 11:31:41.398682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.261 [2024-10-07 11:31:41.398721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.261 [2024-10-07 11:31:41.398738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.261 [2024-10-07 11:31:41.398787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.398810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.262 [2024-10-07 11:31:41.398826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.398858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.398882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.398909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.398928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.398942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.398958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.398972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.398985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.399032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.262 [2024-10-07 11:31:41.399053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.262 [2024-10-07 11:31:41.408653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.408726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.408827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.408856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.262 [2024-10-07 11:31:41.408873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.408941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.408968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.262 [2024-10-07 11:31:41.408985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.409003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.409035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.409057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.409071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.409085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.409270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.262 [2024-10-07 11:31:41.409307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.409339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.409355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.409490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.262 [2024-10-07 11:31:41.420193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.420244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.420389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.420422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.262 [2024-10-07 11:31:41.420440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.420489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.420513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.262 [2024-10-07 11:31:41.420529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.420562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.420587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.420631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.420653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.420667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.420684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.420712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.420727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.420758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.262 [2024-10-07 11:31:41.420776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.262 [2024-10-07 11:31:41.430347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.430419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.262 [2024-10-07 11:31:41.430498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.430527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.262 [2024-10-07 11:31:41.430544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.430617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.262 [2024-10-07 11:31:41.430643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.262 [2024-10-07 11:31:41.430659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.262 [2024-10-07 11:31:41.430678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.430711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.262 [2024-10-07 11:31:41.430732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.430746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.430760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.430790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.262 [2024-10-07 11:31:41.430808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.262 [2024-10-07 11:31:41.430822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.262 [2024-10-07 11:31:41.430835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.262 [2024-10-07 11:31:41.430862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.441176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.441224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.441343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.441375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.441392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.441442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.441465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.441481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.441513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.441552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.441581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.441600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.441614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.441630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.441644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.441657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.441687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.263 [2024-10-07 11:31:41.441704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.452663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.452712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.452806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.452837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.452855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.452903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.452926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.452942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.452974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.452998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.453025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.453043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.453057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.453072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.453087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.453100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.453129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.263 [2024-10-07 11:31:41.453146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.463876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.463923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.465260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.465305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.465373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.465429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.465454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.465470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.465627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.465659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.465689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.465707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.465721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.465738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.465752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.465766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.465796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.263 [2024-10-07 11:31:41.465814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.474086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.474136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.474227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.474257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.474274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.474352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.474379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.474395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.475277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.475335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.475526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.475552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.475566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.475583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.475597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.475625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.475701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.263 [2024-10-07 11:31:41.475721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.484896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.484945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.485054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.485085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.485102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.485150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.485174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.485190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.485223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.485247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.485273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.485291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.485305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.485321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.485351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.263 [2024-10-07 11:31:41.485366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.263 [2024-10-07 11:31:41.485933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.263 [2024-10-07 11:31:41.485960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.263 [2024-10-07 11:31:41.495792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.495841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.263 [2024-10-07 11:31:41.495934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.495964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.263 [2024-10-07 11:31:41.495981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.496029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.263 [2024-10-07 11:31:41.496051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.263 [2024-10-07 11:31:41.496067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.263 [2024-10-07 11:31:41.496099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.496122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.263 [2024-10-07 11:31:41.496167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.263 [2024-10-07 11:31:41.496186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.496200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.496217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.496231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.496244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.496275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.496292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.264 8643.60 IOPS, 33.76 MiB/s [2024-10-07T11:31:52.787Z] [2024-10-07 11:31:41.507506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.507558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.507690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.507723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.264 [2024-10-07 11:31:41.507741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.507790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.507813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.264 [2024-10-07 11:31:41.507829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.507862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.507885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.507930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.507952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.507966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.507983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.507997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.508011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.508041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.508058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.264 [2024-10-07 11:31:41.517629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.517702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.517781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.517809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.264 [2024-10-07 11:31:41.517841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.517908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.517935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.264 [2024-10-07 11:31:41.517951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.517969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.518001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.518022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.518036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.518050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.518080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.518098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.518111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.518124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.519332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.264 [2024-10-07 11:31:41.528138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.528188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.528288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.528333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.264 [2024-10-07 11:31:41.528353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.528404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.528427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.264 [2024-10-07 11:31:41.528442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.528475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.528499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.528526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.528544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.528558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.528574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.528588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.528601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.529824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.529862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.264 [2024-10-07 11:31:41.538265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.538337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.538432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.538462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.264 [2024-10-07 11:31:41.538480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.538527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.538550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.264 [2024-10-07 11:31:41.538566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.538752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.538784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.538913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.538938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.538953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.538970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.538984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.538997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.539113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.539136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.264 [2024-10-07 11:31:41.549504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.549553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.264 [2024-10-07 11:31:41.549644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.549674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.264 [2024-10-07 11:31:41.549691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.549738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.264 [2024-10-07 11:31:41.549762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.264 [2024-10-07 11:31:41.549777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.264 [2024-10-07 11:31:41.549809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.549833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.264 [2024-10-07 11:31:41.549860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.549893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.549909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.549926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.264 [2024-10-07 11:31:41.549940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.264 [2024-10-07 11:31:41.549953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.264 [2024-10-07 11:31:41.549984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.264 [2024-10-07 11:31:41.550001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.559628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.559700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.559779] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.559807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.559823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.559887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.559914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.559930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.559948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.561159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.561202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.561221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.561235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.561457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.265 [2024-10-07 11:31:41.561483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.561499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.561513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.562338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.570052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.570102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.570203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.570234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.570251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.570345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.570373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.570390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.570424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.570447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.570474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.570492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.570506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.570523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.570537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.570550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.570579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.265 [2024-10-07 11:31:41.570596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.580180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.580231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.580338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.580370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.580387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.580436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.580459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.580475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.580663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.580696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.580826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.580851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.580867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.580884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.580898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.580912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.581027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.265 [2024-10-07 11:31:41.581066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.591484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.591535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.591639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.591670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.591688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.591736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.591759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.591775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.591807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.591830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.591856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.591874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.591888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.591904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.591919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.591932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.591961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.265 [2024-10-07 11:31:41.591979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.601604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.601677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.601754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.601783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.601800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.601863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.601889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.601905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.601924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.601955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.601976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.601990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.602021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.602054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.265 [2024-10-07 11:31:41.602072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.602086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.602100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.603302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.265 [2024-10-07 11:31:41.612288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.612352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.265 [2024-10-07 11:31:41.612456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.612487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.265 [2024-10-07 11:31:41.612504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.612553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.265 [2024-10-07 11:31:41.612576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.265 [2024-10-07 11:31:41.612592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.265 [2024-10-07 11:31:41.612624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.612647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.265 [2024-10-07 11:31:41.612674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.612692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.612706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.265 [2024-10-07 11:31:41.612723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.265 [2024-10-07 11:31:41.612737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.265 [2024-10-07 11:31:41.612750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.612780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.612797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.622430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.622504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.622583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.622612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.622629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.622692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.622719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.622752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.622772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.622961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.622991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.623005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.623019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.623153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.623178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.623192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.623206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.623336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.633788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.633838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.633932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.633963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.633980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.634028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.634052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.634067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.634099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.634122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.634149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.634167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.634183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.634199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.634213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.634226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.634256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.634273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.643912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.644006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.644086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.644115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.644132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.644196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.644223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.644239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.644257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.644289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.644310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.644341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.644356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.644388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.644407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.644421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.644434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.645621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.654620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.654678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.654778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.654808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.654825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.654872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.654895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.654911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.654957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.654984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.655011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.655028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.655043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.655074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.655091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.655104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.655134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.655152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.664749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.664823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.664905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.664934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.664950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.665014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.665040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.665057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.665075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.665263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.665292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.665308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.665339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.665475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.266 [2024-10-07 11:31:41.665500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.665515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.665529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.665653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.266 [2024-10-07 11:31:41.676131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.676248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.266 [2024-10-07 11:31:41.676344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.676374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.266 [2024-10-07 11:31:41.676391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.676457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.266 [2024-10-07 11:31:41.676484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.266 [2024-10-07 11:31:41.676500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.266 [2024-10-07 11:31:41.676538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.676573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.266 [2024-10-07 11:31:41.676595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.266 [2024-10-07 11:31:41.676609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.266 [2024-10-07 11:31:41.676622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.266 [2024-10-07 11:31:41.676671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.676692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.676706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.676720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.676747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.686225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.686363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.686396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.267 [2024-10-07 11:31:41.686414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.686458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.686498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.686528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.686545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.686559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.686587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.686650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.686675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.267 [2024-10-07 11:31:41.686691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.686722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.686753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.686771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.686785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.687974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.696964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.697016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.697135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.697167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.267 [2024-10-07 11:31:41.697185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.697233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.697257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.267 [2024-10-07 11:31:41.697272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.697304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.697347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.697377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.697397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.697411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.697427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.697442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.697455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.697484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.697502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.707108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.707158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.707250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.707282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.267 [2024-10-07 11:31:41.707299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.707364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.707389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.267 [2024-10-07 11:31:41.707406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.707594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.707626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.707756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.707781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.707795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.707813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.707844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.707859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.707977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.708001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.718559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.718613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.718717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.718749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.267 [2024-10-07 11:31:41.718767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.718816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.718839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.267 [2024-10-07 11:31:41.718855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.718888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.718911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.718938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.718956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.718970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.718986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.719001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.719014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.719043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.719061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.728698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.728772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.728852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.728881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.267 [2024-10-07 11:31:41.728898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.728962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.267 [2024-10-07 11:31:41.728988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.267 [2024-10-07 11:31:41.729005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.267 [2024-10-07 11:31:41.729040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.729092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.267 [2024-10-07 11:31:41.729118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.729133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.729146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.730363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.267 [2024-10-07 11:31:41.730403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.267 [2024-10-07 11:31:41.730420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.267 [2024-10-07 11:31:41.730435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.267 [2024-10-07 11:31:41.730654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.267 [2024-10-07 11:31:41.739218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.267 [2024-10-07 11:31:41.739277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.739395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.739427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.739445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.739493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.739516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.739532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.739564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.739589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.739616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.739633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.739647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.739664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.739678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.739691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.739721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.268 [2024-10-07 11:31:41.739739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.268 [2024-10-07 11:31:41.749372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.749423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.749516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.749560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.749580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.749631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.749655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.749670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.749858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.749890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.750020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.750046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.750061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.750077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.750092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.750105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.750221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.268 [2024-10-07 11:31:41.750245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.268 [2024-10-07 11:31:41.760723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.760774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.760868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.760898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.760916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.760963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.760986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.761002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.761034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.761058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.761084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.761102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.761116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.761132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.761147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.761175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.761207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.268 [2024-10-07 11:31:41.761225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.268 [2024-10-07 11:31:41.770853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.770905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.770997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.771028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.771045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.771093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.771116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.771132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.771164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.771188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.771214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.771232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.771247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.771263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.771277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.771291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.772506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.268 [2024-10-07 11:31:41.772545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.268 [2024-10-07 11:31:41.781371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.781422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.781547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.781595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.781625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.781705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.781739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.781766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.781814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.781868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.783168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.783213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.783232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.783250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.783265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.783278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.783527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.268 [2024-10-07 11:31:41.783554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.268 [2024-10-07 11:31:41.791502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.791550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.268 [2024-10-07 11:31:41.791644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.791675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.268 [2024-10-07 11:31:41.791692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.791740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.268 [2024-10-07 11:31:41.791763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.268 [2024-10-07 11:31:41.791779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.268 [2024-10-07 11:31:41.791966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.791998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.268 [2024-10-07 11:31:41.792127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.792152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.792167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.792184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.268 [2024-10-07 11:31:41.792198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.268 [2024-10-07 11:31:41.792212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.268 [2024-10-07 11:31:41.792342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.792367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.802817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.802868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.802961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.802991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.803025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.803079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.803103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.803119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.803152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.803176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.803203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.803221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.803235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.803252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.803266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.803279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.803308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.803343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.812945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.813018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.813096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.813124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.813142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.813205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.813232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.813248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.813267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.813299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.813334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.813351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.813365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.814563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.814602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.814620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.814649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.814871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.823497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.823547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.823651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.823681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.823699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.823747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.823770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.823786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.823818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.823841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.823886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.823907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.823921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.823938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.823952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.823965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.823994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.824012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.833619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.833691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.833770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.833799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.833816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.834042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.834073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.834090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.834109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.834241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.834269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.834328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.834348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.834471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.834497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.834511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.834524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.834581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.844811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.844863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.844957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.844988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.845005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.845061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.845084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.845099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.845131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.845155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.845182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.845200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.845214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.845231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.845245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.845258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.269 [2024-10-07 11:31:41.845287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.269 [2024-10-07 11:31:41.845304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.269 [2024-10-07 11:31:41.854937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.855009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.269 [2024-10-07 11:31:41.855089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.855117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.269 [2024-10-07 11:31:41.855134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.855229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.269 [2024-10-07 11:31:41.855256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.269 [2024-10-07 11:31:41.855273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.269 [2024-10-07 11:31:41.855291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.855340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.269 [2024-10-07 11:31:41.855364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.269 [2024-10-07 11:31:41.855379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.269 [2024-10-07 11:31:41.855392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.855423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.855441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.855455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.855468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.856679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.865033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.865157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.865189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.865207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.865252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.865977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.866030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.866049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.866063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.866682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.866765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.866792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.270 [2024-10-07 11:31:41.866809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.867055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.867127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.867151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.867166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.867212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.875126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.875239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.875271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.875288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.875337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.875380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.875399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.875413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.875443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.876865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.876976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.877007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.270 [2024-10-07 11:31:41.877025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.877056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.877089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.877107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.877121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.877151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.885219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.885347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.885379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.885396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.885429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.885461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.885480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.885494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.885525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.886954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.887066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.887097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.270 [2024-10-07 11:31:41.887131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.887165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.887197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.887215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.887229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.887273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.895352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.895467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.895499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.895516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.895548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.895579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.895597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.895611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.895641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.898691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.899772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.899817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.270 [2024-10-07 11:31:41.899837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.900188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.900279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.900305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.900335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.900380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.905700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.905813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.905843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.905861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.905893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.905924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.905960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.905975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.906007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.908975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.909085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.909116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.270 [2024-10-07 11:31:41.909133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.909164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.909196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.909214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.909229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.910130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.270 [2024-10-07 11:31:41.916622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.916735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.916766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.270 [2024-10-07 11:31:41.916784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.270 [2024-10-07 11:31:41.916815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.270 [2024-10-07 11:31:41.916848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.270 [2024-10-07 11:31:41.916866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.270 [2024-10-07 11:31:41.916880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.270 [2024-10-07 11:31:41.916909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.270 [2024-10-07 11:31:41.919831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.270 [2024-10-07 11:31:41.919942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.270 [2024-10-07 11:31:41.919973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.919990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.920022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.920054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.920072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.920087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.920116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.271 [2024-10-07 11:31:41.928492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.928620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.928652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.271 [2024-10-07 11:31:41.928670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.928702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.928734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.928752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.928766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.928796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.271 [2024-10-07 11:31:41.930725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.930835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.930866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.930883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.930915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.930948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.930966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.930980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.931010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.271 [2024-10-07 11:31:41.938597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.938709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.938740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.271 [2024-10-07 11:31:41.938758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.938790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.938822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.938840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.938855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.938885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.271 [2024-10-07 11:31:41.942527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.942641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.942672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.942689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.942744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.942777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.942796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.942810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.942840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.271 [2024-10-07 11:31:41.949152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.949276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.949308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.271 [2024-10-07 11:31:41.949343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.949377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.949410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.949428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.949442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.949472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.271 [2024-10-07 11:31:41.952620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.952729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.952760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.952778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.952810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.952842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.952861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.952875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.952905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.271 [2024-10-07 11:31:41.959242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.959367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.959399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.271 [2024-10-07 11:31:41.959417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.959449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.959482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.959500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.959534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.959721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.271 [2024-10-07 11:31:41.963222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.963354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.963386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.963404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.963438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.963470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.963488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.963502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.963532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.271 [2024-10-07 11:31:41.970682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.970798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.970829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.271 [2024-10-07 11:31:41.970847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.970879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.970911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.970929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.271 [2024-10-07 11:31:41.970943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.271 [2024-10-07 11:31:41.970974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.271 [2024-10-07 11:31:41.973347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.271 [2024-10-07 11:31:41.973453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.271 [2024-10-07 11:31:41.973484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.271 [2024-10-07 11:31:41.973501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.271 [2024-10-07 11:31:41.973532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.271 [2024-10-07 11:31:41.973564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.271 [2024-10-07 11:31:41.973582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:41.973597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:41.973626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:41.980776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:41.980890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:41.980937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:41.980957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:41.980989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:41.981022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:41.981040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:41.981054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:41.981084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:41.984761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:41.984876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:41.984908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.272 [2024-10-07 11:31:41.984925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:41.984957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:41.984989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:41.985007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:41.985022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:41.985052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:41.991358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:41.991487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:41.991518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:41.991537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:41.991569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:41.991601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:41.991619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:41.991633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:41.991663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:41.994852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:41.994961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:41.994991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.272 [2024-10-07 11:31:41.995009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:41.995040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:41.995091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:41.995111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:41.995126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:41.995172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:42.001454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.001566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.001598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:42.001616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.001648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.001680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.001698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.001713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.001743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:42.004941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.005052] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.005083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.272 [2024-10-07 11:31:42.005101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.005684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.005871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.005907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.005924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.006031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:42.013301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.013427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.013459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:42.013477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.013509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.013542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.013559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.013574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.013622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:42.015496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.015608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.015639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.272 [2024-10-07 11:31:42.015657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.015689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.015721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.015739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.015754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.015783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:42.023403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.023516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.023547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:42.023565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.023597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.023629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.023648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.023662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.023692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:42.027311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.027473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.027506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.272 [2024-10-07 11:31:42.027523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.027556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.027588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.027607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.027621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.027651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.272 [2024-10-07 11:31:42.034055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.034175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.272 [2024-10-07 11:31:42.034205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.272 [2024-10-07 11:31:42.034241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.272 [2024-10-07 11:31:42.034275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.272 [2024-10-07 11:31:42.034338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.272 [2024-10-07 11:31:42.034361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.272 [2024-10-07 11:31:42.034375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.272 [2024-10-07 11:31:42.034406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.272 [2024-10-07 11:31:42.037413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.272 [2024-10-07 11:31:42.037521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.037552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.037570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.037602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.037634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.037653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.037667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.037697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.044145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.044259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.044290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.044307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.044355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.044387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.044405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.044419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.044602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.273 [2024-10-07 11:31:42.048124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.048245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.048276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.048294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.048341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.048377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.048411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.048427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.048458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.055546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.055658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.055689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.055706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.055739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.055771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.055789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.055804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.055833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.273 [2024-10-07 11:31:42.058216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.058348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.058381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.058398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.058431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.058464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.058482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.058496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.058680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.065638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.065756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.065788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.065805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.065837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.065869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.065887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.065902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.065932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.273 [2024-10-07 11:31:42.069603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.069754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.069787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.069805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.069837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.069872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.069890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.069905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.069935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.075736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.075852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.075884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.075901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.075933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.075966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.075984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.075998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.076580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.273 [2024-10-07 11:31:42.080116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.080229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.080260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.080277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.080310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.080358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.080377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.080392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.080422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.086478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.086592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.086623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.086640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.086690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.086724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.086742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.086756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.086787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.273 [2024-10-07 11:31:42.090980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.091100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.091132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.273 [2024-10-07 11:31:42.091149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.091182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.091215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.091233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.091247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.091277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.273 [2024-10-07 11:31:42.098421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.273 [2024-10-07 11:31:42.098536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.273 [2024-10-07 11:31:42.098567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.273 [2024-10-07 11:31:42.098585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.273 [2024-10-07 11:31:42.098618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.273 [2024-10-07 11:31:42.098651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.273 [2024-10-07 11:31:42.098669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.273 [2024-10-07 11:31:42.098684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.273 [2024-10-07 11:31:42.098714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.101070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.101178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.101209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.101226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.101258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.101290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.101308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.101355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.101543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.108518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.108632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.108663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.108681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.108713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.108745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.108763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.108777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.108807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.112433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.112546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.112578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.112595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.112627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.112660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.112678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.112692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.112722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.119017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.119137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.119169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.119186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.119233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.119269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.119288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.119302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.119349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.122521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.122632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.122678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.122698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.122730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.122762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.122781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.122795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.122824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.129109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.129232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.129263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.129281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.129313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.129361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.129380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.129394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.129425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.133151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.133414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.133455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.133474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.133583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.133626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.133646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.133661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.133691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.140821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.140936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.140967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.140984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.141016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.141066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.141086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.141100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.141131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.143243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.143367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.143398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.143415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.143447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.143479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.143497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.143512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.143541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.150916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.151030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.151062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.151079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.151111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.151144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.151162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.151176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.151205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.274 [2024-10-07 11:31:42.154832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.154946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.154977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.274 [2024-10-07 11:31:42.154995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.155027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.155059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.155077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.274 [2024-10-07 11:31:42.155091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.274 [2024-10-07 11:31:42.155139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.274 [2024-10-07 11:31:42.161501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.274 [2024-10-07 11:31:42.161640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.274 [2024-10-07 11:31:42.161672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.274 [2024-10-07 11:31:42.161690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.274 [2024-10-07 11:31:42.161722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.274 [2024-10-07 11:31:42.161755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.274 [2024-10-07 11:31:42.161773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.161787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.161817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.164921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.165031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.165063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.165081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.165112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.165144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.165162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.165176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.165206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.275 [2024-10-07 11:31:42.171592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.171706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.171738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.275 [2024-10-07 11:31:42.171755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.171787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.171819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.171837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.171851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.171881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.175686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.175807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.175838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.175874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.175908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.175940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.175958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.175972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.176003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.275 [2024-10-07 11:31:42.183087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.183240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.183273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.275 [2024-10-07 11:31:42.183290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.183337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.183373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.183392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.183406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.183436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.185780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.185886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.185917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.185935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.185977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.186008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.186027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.186041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.186070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.275 [2024-10-07 11:31:42.193237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.193362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.193395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.275 [2024-10-07 11:31:42.193412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.193450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.193482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.193517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.193532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.193564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.197404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.197523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.197555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.197572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.197615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.197649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.197667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.197681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.197711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.275 [2024-10-07 11:31:42.204054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.204177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.204208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.275 [2024-10-07 11:31:42.204226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.204273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.204308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.204344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.204360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.204391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.207509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.207620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.207651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.207669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.207700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.207732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.207751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.207765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.207795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.275 [2024-10-07 11:31:42.214146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.214259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.214303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.275 [2024-10-07 11:31:42.214338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.275 [2024-10-07 11:31:42.214373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.275 [2024-10-07 11:31:42.214406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.275 [2024-10-07 11:31:42.214424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.275 [2024-10-07 11:31:42.214438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.275 [2024-10-07 11:31:42.214467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.275 [2024-10-07 11:31:42.218233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.275 [2024-10-07 11:31:42.218374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.275 [2024-10-07 11:31:42.218407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.275 [2024-10-07 11:31:42.218424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.218457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.218489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.218508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.218522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.218552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.225626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.225777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.225810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.225827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.225860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.225893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.225911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.225928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.225958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.228345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.228455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.228486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.276 [2024-10-07 11:31:42.228503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.228553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.228585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.228603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.228617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.228648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.235754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.235873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.235905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.235922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.235954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.235987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.236005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.236019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.236049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.239829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.239940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.239971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.276 [2024-10-07 11:31:42.239988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.240020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.240052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.240071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.240085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.240115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.246492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.246613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.246644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.246661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.246693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.246726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.246743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.246775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.246808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.249919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.250030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.250060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.276 [2024-10-07 11:31:42.250078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.250109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.250141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.250160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.250174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.250204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.256596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.256706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.256737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.256754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.256786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.256818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.256836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.256850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.256881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.260627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.260763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.260795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.276 [2024-10-07 11:31:42.260812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.260844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.260876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.260894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.260908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.260937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.268007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.268129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.268167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.268185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.268217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.268250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.268268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.268282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.268312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.270716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.270827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.270858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.276 [2024-10-07 11:31:42.270875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.270907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.270939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.270957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.270971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.271155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.276 [2024-10-07 11:31:42.278103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.278215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.276 [2024-10-07 11:31:42.278247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.276 [2024-10-07 11:31:42.278264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.276 [2024-10-07 11:31:42.278307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.276 [2024-10-07 11:31:42.278359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.276 [2024-10-07 11:31:42.278378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.276 [2024-10-07 11:31:42.278392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.276 [2024-10-07 11:31:42.278421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.276 [2024-10-07 11:31:42.282092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.276 [2024-10-07 11:31:42.282202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.282233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.282250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.282282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.282362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.282382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.282397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.282427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.288753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.288873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.288905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.288923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.288955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.288987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.289005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.289019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.289049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.277 [2024-10-07 11:31:42.292177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.292289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.292332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.292352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.292384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.292417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.292435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.292449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.292479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.298843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.298954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.298985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.299002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.299034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.299066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.299084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.299099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.299301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.277 [2024-10-07 11:31:42.302846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.302965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.302996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.303014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.303045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.303077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.303095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.303109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.303139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.310229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.310363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.310396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.310413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.310446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.310479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.310496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.310510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.310541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.277 [2024-10-07 11:31:42.312955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.313064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.313095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.313112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.313143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.313175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.313193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.313207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.313405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.320329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.320442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.320473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.320507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.320541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.320573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.320591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.320605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.320635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.277 [2024-10-07 11:31:42.324339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.324461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.324492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.324509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.324541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.324572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.324590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.324604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.324634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.330967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.331088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.331119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.331136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.331169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.331200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.331218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.331233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.331263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.277 [2024-10-07 11:31:42.334430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.334540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.334570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.277 [2024-10-07 11:31:42.334588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.334619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.334651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.334685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.334701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.334732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.277 [2024-10-07 11:31:42.341055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.277 [2024-10-07 11:31:42.341177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.277 [2024-10-07 11:31:42.341208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.277 [2024-10-07 11:31:42.341225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.277 [2024-10-07 11:31:42.341256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.277 [2024-10-07 11:31:42.341460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.277 [2024-10-07 11:31:42.341488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.277 [2024-10-07 11:31:42.341503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.277 [2024-10-07 11:31:42.341636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.344974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.345093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.345124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.345141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.345173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.345205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.345223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.345237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.345267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.352362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.352474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.352505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.352523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.352555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.352587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.352605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.352620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.352650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.355066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.355178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.355209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.355226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.355441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.355584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.355619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.355637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.355754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.362855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.364154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.364201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.364225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.364473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.365288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.365335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.365356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.366614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.366782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.366880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.366911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.366928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.366961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.366993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.367013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.367027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.367057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.373139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.373296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.373357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.373385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.373456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.373503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.373530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.373554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.375015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.376878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.378496] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.378561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.378593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.378887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.379896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.379946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.379967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.381166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.383253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.383541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.383587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.383607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.383741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.383867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.383902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.383919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.383979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.387002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.387117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.387149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.387167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.387199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.387230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.387249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.387278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.387311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.394366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.394482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.394515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.394532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.394564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.394597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.394615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.394631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.394672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.278 [2024-10-07 11:31:42.397094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.397204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.397235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.278 [2024-10-07 11:31:42.397251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.397455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.397598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.278 [2024-10-07 11:31:42.397634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.278 [2024-10-07 11:31:42.397652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.278 [2024-10-07 11:31:42.397782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.278 [2024-10-07 11:31:42.404455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.278 [2024-10-07 11:31:42.404568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.278 [2024-10-07 11:31:42.404599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.278 [2024-10-07 11:31:42.404617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.278 [2024-10-07 11:31:42.404649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.278 [2024-10-07 11:31:42.404681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.404699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.404714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.404743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.279 [2024-10-07 11:31:42.408429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.408557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.408590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.279 [2024-10-07 11:31:42.408607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.408640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.408672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.408690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.408705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.408734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.279 [2024-10-07 11:31:42.415075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.415197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.415228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.279 [2024-10-07 11:31:42.415246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.415278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.415310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.415346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.415362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.415393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.279 [2024-10-07 11:31:42.418533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.418646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.418677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.279 [2024-10-07 11:31:42.418693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.418725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.418757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.418775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.418790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.418820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.279 [2024-10-07 11:31:42.425164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.425279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.425310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.279 [2024-10-07 11:31:42.425346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.425379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.425431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.425450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.425465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.425649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.279 [2024-10-07 11:31:42.429133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.429253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.429285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.279 [2024-10-07 11:31:42.429302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.429350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.429385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.429403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.429417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.429446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.279 [2024-10-07 11:31:42.436570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.436686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.436717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.279 [2024-10-07 11:31:42.436734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.436766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.436798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.436816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.436830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.436860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.279 [2024-10-07 11:31:42.439225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.439350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.439382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.279 [2024-10-07 11:31:42.439399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.439431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.439464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.439482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.439496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.439697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.279 [2024-10-07 11:31:42.446664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.446783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.446815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.279 [2024-10-07 11:31:42.446833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.446865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.446898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.446916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.446930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.446960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.279 [2024-10-07 11:31:42.450652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.450760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.450791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.279 [2024-10-07 11:31:42.450808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.450840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.450872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.450891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.450905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.279 [2024-10-07 11:31:42.450936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.279 [2024-10-07 11:31:42.457223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.279 [2024-10-07 11:31:42.457353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.279 [2024-10-07 11:31:42.457385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.279 [2024-10-07 11:31:42.457403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.279 [2024-10-07 11:31:42.457452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.279 [2024-10-07 11:31:42.457488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.279 [2024-10-07 11:31:42.457507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.279 [2024-10-07 11:31:42.457521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.457551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.460739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.460852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.460882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.460917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.460951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.460984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.461002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.461016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.461046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.280 [2024-10-07 11:31:42.467312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.467435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.467466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.467484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.467517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.467549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.467567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.467581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.467611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.471407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.471529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.471560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.471578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.471610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.471643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.471661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.471676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.471706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.280 [2024-10-07 11:31:42.479258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.479441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.479474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.479492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.479525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.479559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.479598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.479614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.479647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.481617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.481728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.481759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.481776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.481809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.481842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.481860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.481874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.481904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.280 [2024-10-07 11:31:42.489480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.489599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.489630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.489648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.489680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.489713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.489731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.489745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.489775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.493612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.493723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.493754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.493772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.493805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.493837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.493855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.493870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.493900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.280 8696.33 IOPS, 33.97 MiB/s [2024-10-07T11:31:52.803Z] [2024-10-07 11:31:42.500825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.502124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.502168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.502189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.502423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.502475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.502495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.502511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.502543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.503703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.503812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.503843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.503860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.503892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.503924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.503942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.503956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.503986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
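(Aside on the interleaved performance sample above: 8696.33 IOPS alongside 33.97 MiB/s is consistent with roughly 4 KiB per I/O, since 33.97 MiB/s ≈ 35.6 MB/s and 35.6 MB/s ÷ 8696.33 IOPS ≈ 4096 bytes; the bandwidth figure is presumably just IOPS × the 4 KiB block size used by the workload.)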
00:20:57.280 [2024-10-07 11:31:42.511185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.511339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.511371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.511389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.511422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.511455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.511473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.511487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.511517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.280 [2024-10-07 11:31:42.514574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.514693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.514723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.280 [2024-10-07 11:31:42.514759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.280 [2024-10-07 11:31:42.514795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.280 [2024-10-07 11:31:42.514828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.280 [2024-10-07 11:31:42.514846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.280 [2024-10-07 11:31:42.514861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.280 [2024-10-07 11:31:42.514891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.280 [2024-10-07 11:31:42.521968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.280 [2024-10-07 11:31:42.522080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.280 [2024-10-07 11:31:42.522112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.280 [2024-10-07 11:31:42.522130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.522162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.522195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.522213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.522227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.522257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.524667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.524774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.524805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.524823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.524855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.524887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.524905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.524919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.525103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.532057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.532168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.532199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.532216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.532248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.532280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.532298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.532347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.532381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.536084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.536233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.536265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.536283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.536329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.536366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.536385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.536400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.536431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.542847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.542968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.542999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.543017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.543049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.543080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.543098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.543112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.543142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.546208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.546352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.546385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.546403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.546436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.546468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.546486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.546501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.546531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.552936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.553068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.553099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.553117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.553164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.553199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.553217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.553231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.553262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.556984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.557104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.557136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.557153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.557200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.557235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.557254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.557268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.557297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.564449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.564562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.564594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.564611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.564644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.564680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.564698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.564712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.564743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.567071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.567182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.567213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.567230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.567278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.567312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.567346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.567361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.567392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.574543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.574655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.574687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.574704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.574735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.574767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.574785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.574806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.574836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.281 [2024-10-07 11:31:42.578582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.578696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.578726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.281 [2024-10-07 11:31:42.578744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.578775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.578808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.578826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.578841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.281 [2024-10-07 11:31:42.578870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.281 [2024-10-07 11:31:42.585209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.281 [2024-10-07 11:31:42.585345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.281 [2024-10-07 11:31:42.585377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.281 [2024-10-07 11:31:42.585394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.281 [2024-10-07 11:31:42.585427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.281 [2024-10-07 11:31:42.585460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.281 [2024-10-07 11:31:42.585478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.281 [2024-10-07 11:31:42.585493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.585540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.588674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.588784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.588815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.588832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.588864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.588896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.588914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.588929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.588958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.595302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.595427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.595458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.595475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.595507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.595539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.595556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.595571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.595755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.599276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.599416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.599449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.599466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.599499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.599533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.599552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.599567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.599597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.606708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.606823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.606870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.606889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.606921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.606954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.606972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.606986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.607016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.609392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.609499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.609529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.609546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.609578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.609610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.609628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.609643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.609827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.616802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.616914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.616945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.616962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.616994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.617026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.617044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.617059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.617088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.620760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.620872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.620903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.620920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.620953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.621002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.621021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.621036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.621066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.627382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.627503] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.627534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.627552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.627584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.627617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.627634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.627648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.627678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.630852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.630964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.630995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.631012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.631045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.631076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.631094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.631109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.631138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.637473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.637585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.637616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.637634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.637665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.637856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.637881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.637896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.638028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.641382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.641502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.641533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.282 [2024-10-07 11:31:42.641551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.641584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.641618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.641636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.641650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.641679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.282 [2024-10-07 11:31:42.648784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.648904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.282 [2024-10-07 11:31:42.648936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.282 [2024-10-07 11:31:42.648953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.282 [2024-10-07 11:31:42.648985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.282 [2024-10-07 11:31:42.649017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.282 [2024-10-07 11:31:42.649035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.282 [2024-10-07 11:31:42.649049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.282 [2024-10-07 11:31:42.649080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.282 [2024-10-07 11:31:42.651474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.282 [2024-10-07 11:31:42.651581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.651611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.651629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.651660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.651692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.651710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.651725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.651908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.658875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.658987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.659018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.659052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.659086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.659118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.659136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.659150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.659180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.283 [2024-10-07 11:31:42.662820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.662933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.662964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.662981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.663013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.663046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.663065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.663079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.663109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.669424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.669544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.669576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.669594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.669641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.669677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.669695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.669709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.669739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.283 [2024-10-07 11:31:42.672910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.673020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.673052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.673069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.673100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.673133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.673151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.673182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.673213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.679512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.679624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.679655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.679673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.679861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.680003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.680028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.680043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.680159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.283 [2024-10-07 11:31:42.683425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.683543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.683574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.683591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.683623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.683656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.683674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.683688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.683718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.690766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.690879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.690911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.690928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.690960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.690993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.691011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.691025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.691055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.283 [2024-10-07 11:31:42.693512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.693635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.693667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.693684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.693870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.694012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.694051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.694069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.694186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.700859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.700972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.701003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.701020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.701052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.701085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.701102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.701117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.701147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.283 [2024-10-07 11:31:42.704701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.704813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.704844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.283 [2024-10-07 11:31:42.704861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.704894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.283 [2024-10-07 11:31:42.704926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.283 [2024-10-07 11:31:42.704943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.283 [2024-10-07 11:31:42.704963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.283 [2024-10-07 11:31:42.704992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.283 [2024-10-07 11:31:42.711293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.283 [2024-10-07 11:31:42.711429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.283 [2024-10-07 11:31:42.711462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.283 [2024-10-07 11:31:42.711479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.283 [2024-10-07 11:31:42.711529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.711563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.711581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.711596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.711626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.714793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.714904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.714935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.714953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.714985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.715016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.715034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.715048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.715078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.721397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.721509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.721540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.721559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.721746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.721887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.721912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.721926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.722042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.725227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.725358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.725390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.725408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.725440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.725489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.725511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.725540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.725573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.732617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.732732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.732764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.732782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.732814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.732847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.732865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.732880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.732909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.735328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.735438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.735468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.735486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.735673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.735814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.735859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.735877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.735996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.742711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.742823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.742854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.742871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.742903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.742935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.742952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.742967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.743013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.746561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.746674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.746722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.746741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.746774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.746807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.746825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.746845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.746875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.753147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.753269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.753301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.753331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.753367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.753399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.753417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.753431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.753461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.756649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.756767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.756799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.756816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.756848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.756881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.756900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.756914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.756944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.763240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.763367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.763399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.763417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.763450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.763501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.763521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.763535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.763565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.767336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.767457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.767487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.284 [2024-10-07 11:31:42.767505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.767537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.767569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.767587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.767601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.767631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.284 [2024-10-07 11:31:42.774677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.774793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.774825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.284 [2024-10-07 11:31:42.774842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.284 [2024-10-07 11:31:42.774874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.284 [2024-10-07 11:31:42.774906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.284 [2024-10-07 11:31:42.774924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.284 [2024-10-07 11:31:42.774938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.284 [2024-10-07 11:31:42.774969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.284 [2024-10-07 11:31:42.777426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.284 [2024-10-07 11:31:42.777535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.284 [2024-10-07 11:31:42.777566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.777583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.777770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.777912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.777948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.777966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.778084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.784771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.784885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.784917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.784934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.784966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.784998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.785016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.785030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.785060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.788696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.788809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.788841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.788858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.788890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.788922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.788940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.788954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.788984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.795307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.795444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.795475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.795492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.795525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.795557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.795575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.795589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.795619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.798788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.798898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.798929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.798962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.798996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.799029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.799047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.799061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.799090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.805416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.805531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.805562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.805579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.805611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.805798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.805825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.805840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.805972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.809340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.809461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.809492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.809519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.809551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.809584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.809602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.809616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.809646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.816808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.816923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.816955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.816973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.817005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.817037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.817062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.817086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.817118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.819430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.819540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.819577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.819595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.819627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.819659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.819677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.819692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.819721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.826904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.827018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.827055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.827074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.827107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.827139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.827156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.827171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.827200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.830926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.831075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.831117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.831137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.831170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.831203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.831221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.831235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.831266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.285 [2024-10-07 11:31:42.837710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.837851] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.837896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.285 [2024-10-07 11:31:42.837915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.837948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.837980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.837998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.838012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.285 [2024-10-07 11:31:42.838042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.285 [2024-10-07 11:31:42.841026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.285 [2024-10-07 11:31:42.841135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.285 [2024-10-07 11:31:42.841168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.285 [2024-10-07 11:31:42.841185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.285 [2024-10-07 11:31:42.841217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.285 [2024-10-07 11:31:42.841249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.285 [2024-10-07 11:31:42.841267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.285 [2024-10-07 11:31:42.841281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.841311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.847817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.847931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.847962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.847979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.848011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.848043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.848061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.848075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.848263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.851850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.851973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.852004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.852021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.852072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.852105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.852123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.852137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.852168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.859245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.859427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.859470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.859490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.859523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.859556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.859574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.859588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.859619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.861938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.862046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.862076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.862093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.862124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.862157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.862175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.862189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.862219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.869425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.869547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.869579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.869596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.869628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.869660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.869678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.869708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.869742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.873549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.873664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.873694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.873711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.873743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.873775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.873794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.873808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.873838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.880233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.880388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.880420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.880438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.880470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.880503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.880523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.880537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.880568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.883642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.883753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.883784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.883801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.883833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.883865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.883883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.883897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.883928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.890353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.890466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.890513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.890532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.890565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.890598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.890616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.890630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.890816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.894347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.894470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.894502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.894520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.894552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.894584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.894603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.894617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.894647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.901732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.901848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.901880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.901897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.901929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.901961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.901979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.901994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.902024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.904434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.904543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.904574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.286 [2024-10-07 11:31:42.904591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.904623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.904670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.904689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.904703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.904888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.286 [2024-10-07 11:31:42.911821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.286 [2024-10-07 11:31:42.911935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.286 [2024-10-07 11:31:42.911967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.286 [2024-10-07 11:31:42.911985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.286 [2024-10-07 11:31:42.912016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.286 [2024-10-07 11:31:42.912048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.286 [2024-10-07 11:31:42.912066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.286 [2024-10-07 11:31:42.912081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.286 [2024-10-07 11:31:42.912111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.286 [2024-10-07 11:31:42.915822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.915936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.915967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.915984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.916016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.916048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.916066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.916080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.916110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.922407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.922529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.922561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.922579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.922611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.922643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.922661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.922677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.922707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.925913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.926023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.926054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.926071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.926103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.926135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.926153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.926175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.926205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.932519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.932631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.932662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.932679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.932865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.933008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.933043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.933061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.933178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.936409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.936528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.936559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.936577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.936609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.936657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.936679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.936694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.936726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.943809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.943924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.943955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.943986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.944020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.944052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.944070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.944084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.944115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.946495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.946606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.946637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.946654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.946685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.946718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.946736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.946750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.946780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.953903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.954016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.954047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.954064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.954096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.954127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.954146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.954160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.954190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.957852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.957961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.957993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.958010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.958041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.958073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.958107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.958123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.958154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.964524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.964647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.964678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.964696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.964728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.964760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.964777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.964791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.964821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.967939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.968050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.968082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.968099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.968130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.968162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.968181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.968195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.968225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.287 [2024-10-07 11:31:42.974619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.974731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.974762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.287 [2024-10-07 11:31:42.974779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.287 [2024-10-07 11:31:42.974812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.287 [2024-10-07 11:31:42.974844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.287 [2024-10-07 11:31:42.974862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.287 [2024-10-07 11:31:42.974876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.287 [2024-10-07 11:31:42.975060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.287 [2024-10-07 11:31:42.978571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.287 [2024-10-07 11:31:42.978708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.287 [2024-10-07 11:31:42.978742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.287 [2024-10-07 11:31:42.978759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:42.978791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:42.978823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:42.978841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:42.978856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:42.978886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:42.985945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:42.986060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:42.986091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:42.986109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:42.986141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:42.986173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:42.986191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:42.986205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:42.986235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:42.988673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:42.988781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:42.988813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:42.988831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:42.988862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:42.989048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:42.989084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:42.989102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:42.989234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:42.996040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:42.996154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:42.996187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:42.996205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:42.996253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:42.996286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:42.996304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:42.996332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:42.996367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:42.999951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.000062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.000092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.000110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.000142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.000174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.000192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.000206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.000236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:43.006556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.006677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.006710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:43.006727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.006759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.006791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.006809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.006823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.006853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:43.010041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.010151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.010182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.010200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.010231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.010263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.010281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.010356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.010393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:43.016653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.016766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.016798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:43.016816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.016848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.016879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.016897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.016911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.016942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:43.020727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.020848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.020879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.020896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.020928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.020961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.020979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.020994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.021024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:43.028051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.028219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.028252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:43.028270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.028302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.028351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.028370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.028385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.028415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:43.030821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.030930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.030976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.030995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.031027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.031213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.031242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.031258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.031406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:43.038144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.038257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.038300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:43.038344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.038380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.038413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.038431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.038445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.038475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:43.042146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.042256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.042298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.042330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.042366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.042399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.042417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.042431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.042461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.288 [2024-10-07 11:31:43.048725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.048846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.048877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.288 [2024-10-07 11:31:43.048894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.288 [2024-10-07 11:31:43.048926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.288 [2024-10-07 11:31:43.048993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.288 [2024-10-07 11:31:43.049015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.288 [2024-10-07 11:31:43.049030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.288 [2024-10-07 11:31:43.049059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.288 [2024-10-07 11:31:43.052232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.288 [2024-10-07 11:31:43.052356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.288 [2024-10-07 11:31:43.052389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.288 [2024-10-07 11:31:43.052406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.052439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.052471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.052490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.052504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.052534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.058814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.058926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.058958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.058975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.059008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.059194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.059221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.059235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.059387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.062753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.062873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.062905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.062922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.062955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.062987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.063005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.063019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.063049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.070097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.070217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.070249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.070266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.070311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.070365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.070383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.070397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.070428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.072844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.072952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.072982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.073000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.073031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.073218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.073247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.073262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.073407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.080190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.080303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.080349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.080367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.080399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.080432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.080450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.080464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.080494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.084142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.084254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.084284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.084341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.084376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.084408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.084427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.084441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.084472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.090785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.090910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.090941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.090959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.090991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.091023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.091041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.091055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.091086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.094234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.094374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.094419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.094436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.094469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.094501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.094520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.094534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.094565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.100883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.101010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.101041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.101059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.101090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.101122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.101157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.101172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.101381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.104853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.104974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.105005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.105022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.105070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.105106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.105124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.105138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.105168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.112250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.112414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.112447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.112465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.112497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.112530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.112548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.112562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.112593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.114943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.115054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.115085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.115102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.115133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.115165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.115184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.115198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.115228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.289 [2024-10-07 11:31:43.122360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.122490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.122522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.289 [2024-10-07 11:31:43.122540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.122572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.122604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.122621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.122636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.122666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.289 [2024-10-07 11:31:43.126411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.289 [2024-10-07 11:31:43.126525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.289 [2024-10-07 11:31:43.126555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.289 [2024-10-07 11:31:43.126573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.289 [2024-10-07 11:31:43.126604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.289 [2024-10-07 11:31:43.126636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.289 [2024-10-07 11:31:43.126655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.289 [2024-10-07 11:31:43.126669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.289 [2024-10-07 11:31:43.126698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.133034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.133158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.133189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.133207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.133244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.133277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.133295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.133309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.133356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.136500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.136612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.136642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.136660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.136709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.136743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.136761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.136775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.136805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.143124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.143237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.143268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.143285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.143331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.143368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.143386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.143400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.143584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.147084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.147204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.147234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.147252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.147283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.147329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.147350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.147364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.147395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.154484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.154599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.154630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.154647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.154680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.154712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.154730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.154759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.154791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.157173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.157282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.157313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.157344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.157377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.157410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.157428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.157442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.157626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.164580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.164693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.164725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.164742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.164774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.164806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.164824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.164838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.164868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.168537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.168650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.168681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.168698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.168729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.168762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.168780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.168795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.168825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.175102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.175223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.175270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.175289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.175337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.175374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.175392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.175406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.175436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.178626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.178738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.178769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.178787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.178818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.178850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.178868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.178883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.178930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.185192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.185306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.185350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.185369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.185555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.185697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.185732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.185750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.185868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.290 [2024-10-07 11:31:43.189040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.189159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.189189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.290 [2024-10-07 11:31:43.189207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.189239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.189286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.189305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.189335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.189368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.290 [2024-10-07 11:31:43.196468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.290 [2024-10-07 11:31:43.196585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.290 [2024-10-07 11:31:43.196617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.290 [2024-10-07 11:31:43.196634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.290 [2024-10-07 11:31:43.196667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.290 [2024-10-07 11:31:43.196699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.290 [2024-10-07 11:31:43.196717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.290 [2024-10-07 11:31:43.196731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.290 [2024-10-07 11:31:43.196762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.199127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.199238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.199268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.199285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.199330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.199366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.199384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.199399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.199428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.207530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.207645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.207676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.207694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.207726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.207758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.207776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.207790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.207839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.210492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.210603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.210634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.210652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.210683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.211503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.211540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.211558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.212438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.217620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.217738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.217769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.217787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.217818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.217850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.217868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.217882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.217912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.220584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.220693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.220724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.220742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.220773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.220805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.220822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.220837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.220867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.227929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.228043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.228074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.228106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.228140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.228173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.228190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.228205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.228235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.230673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.230784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.230815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.230832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.230863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.231050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.231076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.231091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.231221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.238019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.238133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.238165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.238182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.238214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.238246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.238263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.238277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.238337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.241986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.242097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.242127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.242144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.242176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.242208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.242242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.242258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.242302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.248577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.248697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.248728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.248746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.248778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.248826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.248848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.248863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.248893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.252076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.252188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.252220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.252237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.252269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.252301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.252334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.252350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.252382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.258669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.258782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.258813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.258830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.258862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.258894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.258912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.258926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.259110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.262660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.262797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.262828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.262846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.262878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.262920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.262938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.262953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.262984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.270067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.270232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.270277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.291 [2024-10-07 11:31:43.270339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.270394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.270444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.270472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.270494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.270562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.291 [2024-10-07 11:31:43.272786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.272946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.272988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.291 [2024-10-07 11:31:43.273014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.291 [2024-10-07 11:31:43.273287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.291 [2024-10-07 11:31:43.273510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.291 [2024-10-07 11:31:43.273570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.291 [2024-10-07 11:31:43.273596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.291 [2024-10-07 11:31:43.273764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.291 [2024-10-07 11:31:43.280190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.291 [2024-10-07 11:31:43.280368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.291 [2024-10-07 11:31:43.280413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.280443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.280536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.280595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.280630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.280657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.282134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.285813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.286861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.286922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.286953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.287086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.287375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.287434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.287468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.287657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.291067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.291263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.291333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.291367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.291407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.292627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.292667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.292685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.292903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.297008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.298347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.298395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.298416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.298569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.298612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.298631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.298661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.298696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.301191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.301301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.301347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.301366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.301553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.301696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.301731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.301748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.301877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.307112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.307226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.307258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.307276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.307308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.308208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.308246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.308263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.308464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.312577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.312694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.312726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.312743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.312776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.312808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.312826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.312841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.312872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.318041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.318156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.318205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.318225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.318258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.318309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.318352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.318367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.318400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.322672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.322787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.322819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.322837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.322870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.322902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.322919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.322934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.322964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.328959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.329073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.329105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.329123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.329155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.329188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.329206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.329221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.329250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.333472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.333593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.333625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.333642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.333675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.333726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.333746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.333760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.333791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.340883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.341053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.341087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.341105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.341138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.341170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.341189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.341204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.341235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.343564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.343676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.343707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.343724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.343756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.343788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.343807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.343821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.343850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.351005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.351118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.351150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.351167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.351200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.351232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.351251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.351265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.351329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.292 [2024-10-07 11:31:43.355141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.355257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.355293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.292 [2024-10-07 11:31:43.355310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.355359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.355392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.355410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.292 [2024-10-07 11:31:43.355424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.292 [2024-10-07 11:31:43.355454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.292 [2024-10-07 11:31:43.361819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.292 [2024-10-07 11:31:43.361959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.292 [2024-10-07 11:31:43.361991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.292 [2024-10-07 11:31:43.362009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.292 [2024-10-07 11:31:43.362041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.292 [2024-10-07 11:31:43.362073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.292 [2024-10-07 11:31:43.362091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.362106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.362136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.365231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.365356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.365389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.365407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.365439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.365472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.365490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.365504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.365534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.371909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.372023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.372056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.372090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.372124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.372157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.372176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.372190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.372220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.376037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.376160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.376192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.376210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.376242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.376275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.376294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.376308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.376355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.383415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.383568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.383601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.383619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.383652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.383685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.383710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.383724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.383755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.386126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.386235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.386266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.386283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.386344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.386381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.386416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.386438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.386624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.393510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.393624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.393656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.393674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.393705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.393737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.393756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.393770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.393800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.397530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.397645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.397676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.397693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.397725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.397758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.397776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.397790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.397820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.404165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.404286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.404330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.404351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.404384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.404417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.404435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.404450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.404480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.407626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.407756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.407788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.407806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.407838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.407869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.407888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.407902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.407932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.414260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.414417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.414460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.414478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.414510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.414543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.414561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.414575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.414761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.418296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.418455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.418487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.418505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.418538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.418571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.418589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.418603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.418634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.425689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.425804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.425836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.425854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.425906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.425942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.425960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.425974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.426005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.293 [2024-10-07 11:31:43.428398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.428507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.428538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.293 [2024-10-07 11:31:43.428556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.428587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.428620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.428637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.428652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.293 [2024-10-07 11:31:43.428836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.293 [2024-10-07 11:31:43.435785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.293 [2024-10-07 11:31:43.435898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.293 [2024-10-07 11:31:43.435930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.293 [2024-10-07 11:31:43.435947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.293 [2024-10-07 11:31:43.435980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.293 [2024-10-07 11:31:43.436012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.293 [2024-10-07 11:31:43.436031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.293 [2024-10-07 11:31:43.436045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.436075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.294 [2024-10-07 11:31:43.439798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.439950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.439982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.294 [2024-10-07 11:31:43.439999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.440032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.440064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.440082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.440115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.440147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.294 [2024-10-07 11:31:43.446572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.446694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.446726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.294 [2024-10-07 11:31:43.446743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.446775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.446807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.446826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.446840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.446870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.294 [2024-10-07 11:31:43.449918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.450027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.450057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.294 [2024-10-07 11:31:43.450075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.450106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.450139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.450157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.450172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.450201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.294 [2024-10-07 11:31:43.456662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.456774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.456805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.294 [2024-10-07 11:31:43.456823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.456855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.456887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.456905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.456919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.456949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.294 [2024-10-07 11:31:43.460735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.460855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.460903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.294 [2024-10-07 11:31:43.460923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.460956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.460988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.461007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.461021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.461051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.294 [2024-10-07 11:31:43.468169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.468285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.468329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.294 [2024-10-07 11:31:43.468350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.468383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.468416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.468435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.468449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.468480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.294 [2024-10-07 11:31:43.470824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.470936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.470967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.294 [2024-10-07 11:31:43.470984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.471030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.471066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.471084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.471098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.471128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.294 [2024-10-07 11:31:43.478378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.294 [2024-10-07 11:31:43.478490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.294 [2024-10-07 11:31:43.478522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.294 [2024-10-07 11:31:43.478540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.294 [2024-10-07 11:31:43.478572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.294 [2024-10-07 11:31:43.478621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.294 [2024-10-07 11:31:43.478641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.294 [2024-10-07 11:31:43.478655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.294 [2024-10-07 11:31:43.478685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.294 [2024-10-07 11:31:43.482242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.309 [2024-10-07 11:31:43.482385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.309 [2024-10-07 11:31:43.482418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.309 [2024-10-07 11:31:43.482436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.309 [2024-10-07 11:31:43.482688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.309 [2024-10-07 11:31:43.482780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.309 [2024-10-07 11:31:43.482810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.309 [2024-10-07 11:31:43.482826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.309 [2024-10-07 11:31:43.482859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.309 [2024-10-07 11:31:43.488470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.309 [2024-10-07 11:31:43.488593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.309 [2024-10-07 11:31:43.488624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.309 [2024-10-07 11:31:43.488642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.309 [2024-10-07 11:31:43.488674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.309 [2024-10-07 11:31:43.488706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.309 [2024-10-07 11:31:43.488725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.309 [2024-10-07 11:31:43.488739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.309 [2024-10-07 11:31:43.488768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.309 [2024-10-07 11:31:43.493241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.309 [2024-10-07 11:31:43.493363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.309 [2024-10-07 11:31:43.493395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.309 [2024-10-07 11:31:43.493413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.309 [2024-10-07 11:31:43.493445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.309 [2024-10-07 11:31:43.493488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.309 [2024-10-07 11:31:43.493509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.309 [2024-10-07 11:31:43.493523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.493571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 8750.00 IOPS, 34.18 MiB/s [2024-10-07T11:31:52.833Z] [2024-10-07 11:31:43.500672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.500884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.500918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.500936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.501055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.501118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.501141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.501156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.501188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.503342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.503451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.503489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.503508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.504075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.504272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.504308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.504339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.504448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.511648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.511761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.511792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.511810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.511842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.511874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.511892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.511907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.511936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.513845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.513953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.513984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.514017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.514050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.514083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.514101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.514115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.514156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.521742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.521856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.521887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.521904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.521936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.521968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.521986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.522000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.522031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.525628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.525740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.525771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.525788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.525821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.525853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.525871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.525885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.525915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.532277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.532409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.532451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.532470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.532503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.532536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.532571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.532586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.532618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.535718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.535827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.535858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.535876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.535907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.535939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.535956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.535971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.536001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.542380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.542501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.542532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.542551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.542583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.542615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.542633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.542647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.542677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.545806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.545917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.545948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.545966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.545997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.546030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.546048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.546063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.546666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.554411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.554594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.554627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.554645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.554678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.554711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.554730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.554744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.554775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.556728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.556837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.556868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.556890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.556922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.556955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.556972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.556987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.557016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.564553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.564665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.564696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.564713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.564745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.564777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.564796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.564810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.564841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.568616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.568727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.568758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.310 [2024-10-07 11:31:43.568776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.568827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.568861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.568879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.568893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.568928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.310 [2024-10-07 11:31:43.575349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.575470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.310 [2024-10-07 11:31:43.575502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.310 [2024-10-07 11:31:43.575520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.310 [2024-10-07 11:31:43.575553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.310 [2024-10-07 11:31:43.575586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.310 [2024-10-07 11:31:43.575605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.310 [2024-10-07 11:31:43.575619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.310 [2024-10-07 11:31:43.575649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.310 [2024-10-07 11:31:43.578709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.310 [2024-10-07 11:31:43.578822] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.578853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.578870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.578901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.578934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.578952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.578966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.578996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.585443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.585585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.585622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.585640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.585672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.585704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.585722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.585754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.585957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.589501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.589623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.589655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.589672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.589705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.589738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.589756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.589770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.589801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.596714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.596827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.596859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.596877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.596909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.596941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.596959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.596973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.597004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.600214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.600341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.600373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.600391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.600575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.600647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.600671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.600685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.600716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.606802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.606937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.606969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.606987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.607019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.607052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.607070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.607085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.607115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.610563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.610677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.610708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.610726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.610758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.610790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.610808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.610822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.610852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.616906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.617018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.617049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.617066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.617786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.618403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.618443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.618460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.618694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.620659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.620770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.620800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.620817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.620849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.620913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.620936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.620950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.620980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.626995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.627108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.627140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.627158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.627189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.627221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.627239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.627254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.627283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.632945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.633164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.633197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.633215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.633247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.633280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.633298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.633312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.633361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.637089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.637199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.637230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.637248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.637279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.637312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.637350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.637365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.637412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.311 [2024-10-07 11:31:43.643039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.643153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.643184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.311 [2024-10-07 11:31:43.643202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.643234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.311 [2024-10-07 11:31:43.643266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.311 [2024-10-07 11:31:43.643284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.311 [2024-10-07 11:31:43.643298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.311 [2024-10-07 11:31:43.643342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.311 [2024-10-07 11:31:43.647178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.311 [2024-10-07 11:31:43.647290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.311 [2024-10-07 11:31:43.647335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.311 [2024-10-07 11:31:43.647355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.311 [2024-10-07 11:31:43.647388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.647420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.647438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.647452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.647482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.312 [2024-10-07 11:31:43.654300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.654429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.654460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.312 [2024-10-07 11:31:43.654477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.654509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.654541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.654560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.654574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.654604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.312 [2024-10-07 11:31:43.657602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.657711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.657741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.312 [2024-10-07 11:31:43.657776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.657810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.657842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.657860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.657874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.657904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.312 [2024-10-07 11:31:43.666238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.666373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.666406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.312 [2024-10-07 11:31:43.666424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.666457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.666489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.666507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.666521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.666551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.312 [2024-10-07 11:31:43.668470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.668579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.668609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.312 [2024-10-07 11:31:43.668626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.668658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.668690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.668708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.668722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.668752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.312 [2024-10-07 11:31:43.676350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.676461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.676493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.312 [2024-10-07 11:31:43.676510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.676542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.676574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.676608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.676624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.676655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.312 [2024-10-07 11:31:43.680303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.680429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.680461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.312 [2024-10-07 11:31:43.680478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.312 [2024-10-07 11:31:43.680510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.312 [2024-10-07 11:31:43.680542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.312 [2024-10-07 11:31:43.680561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.312 [2024-10-07 11:31:43.680575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.312 [2024-10-07 11:31:43.680606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.312 [2024-10-07 11:31:43.686950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.312 [2024-10-07 11:31:43.687069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.312 [2024-10-07 11:31:43.687100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.312 [2024-10-07 11:31:43.687117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.687165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.687201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.687219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.687233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.687262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.322 [2024-10-07 11:31:43.690405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.690516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.690547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.322 [2024-10-07 11:31:43.690564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.690595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.690627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.690645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.690659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.690689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.322 [2024-10-07 11:31:43.697042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.697157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.697189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.322 [2024-10-07 11:31:43.697206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.697238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.697270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.697288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.697302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.697522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.322 [2024-10-07 11:31:43.701062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.701184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.701215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.322 [2024-10-07 11:31:43.701232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.701264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.701296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.701327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.701345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.701378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.322 [2024-10-07 11:31:43.708407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.708557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.708589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.322 [2024-10-07 11:31:43.708607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.708639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.708672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.708690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.708703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.708734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.322 [2024-10-07 11:31:43.711291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.711416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.711448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.322 [2024-10-07 11:31:43.711465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.711515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.711548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.711566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.711580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.711610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.322 [2024-10-07 11:31:43.718692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.718807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.718838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.322 [2024-10-07 11:31:43.718855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.718888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.718920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.718938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.718953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.719858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.322 [2024-10-07 11:31:43.722868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.724156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.724200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.322 [2024-10-07 11:31:43.724220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.724371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.724413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.724432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.724446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.724477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.322 [2024-10-07 11:31:43.729553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.729668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.729700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.322 [2024-10-07 11:31:43.729717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.729749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.729781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.729799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.729830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.729863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.322 [2024-10-07 11:31:43.732957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.733068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.733099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.322 [2024-10-07 11:31:43.733117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.322 [2024-10-07 11:31:43.734020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.322 [2024-10-07 11:31:43.734228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.322 [2024-10-07 11:31:43.734264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.322 [2024-10-07 11:31:43.734282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.322 [2024-10-07 11:31:43.734389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.322 [2024-10-07 11:31:43.740459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.322 [2024-10-07 11:31:43.740574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.322 [2024-10-07 11:31:43.740606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.740623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.740656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.740688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.740706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.740720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.740750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.323 [2024-10-07 11:31:43.743682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.743796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.743827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.323 [2024-10-07 11:31:43.743845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.743877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.743910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.743928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.743942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.743972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.323 [2024-10-07 11:31:43.752283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.752488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.752521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.752539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.752572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.752605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.752624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.752638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.752669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.323 [2024-10-07 11:31:43.754580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.754691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.754722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.323 [2024-10-07 11:31:43.754740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.754772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.754803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.754822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.754837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.754866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.323 [2024-10-07 11:31:43.762865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.762988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.763020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.763038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.764226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.764482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.764519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.764537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.765357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.323 [2024-10-07 11:31:43.766736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.766847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.766878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.323 [2024-10-07 11:31:43.766896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.766928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.766979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.766998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.767012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.767043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.323 [2024-10-07 11:31:43.773048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.773162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.773194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.773212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.773244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.773277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.773295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.773309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.773355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.323 [2024-10-07 11:31:43.776828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.776939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.776970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.323 [2024-10-07 11:31:43.776988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.778173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.778447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.778485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.778503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.779306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.323 [2024-10-07 11:31:43.783139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.783249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.783281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.783298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.783345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.783381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.783399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.783413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.783616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.323 [2024-10-07 11:31:43.787133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.787247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.787278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.323 [2024-10-07 11:31:43.787296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.787342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.787378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.787396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.787411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.787440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.323 [2024-10-07 11:31:43.794537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.323 [2024-10-07 11:31:43.794657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.323 [2024-10-07 11:31:43.794689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.323 [2024-10-07 11:31:43.794707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.323 [2024-10-07 11:31:43.794739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.323 [2024-10-07 11:31:43.794772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.323 [2024-10-07 11:31:43.794790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.323 [2024-10-07 11:31:43.794805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.323 [2024-10-07 11:31:43.794834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.797226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.797349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.797380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.797398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.797431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.797463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.797482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.797496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.797526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.324 [2024-10-07 11:31:43.804634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.804758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.804790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.324 [2024-10-07 11:31:43.804830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.804865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.804898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.804917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.804932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.804962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.808711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.808826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.808857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.808875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.808907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.808940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.808958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.808973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.809003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.324 [2024-10-07 11:31:43.815390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.815516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.815548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.324 [2024-10-07 11:31:43.815566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.815599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.815631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.815649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.815663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.815693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.818804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.818917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.818948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.818966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.818998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.819030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.819070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.819086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.819118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.324 [2024-10-07 11:31:43.825502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.825618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.825650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.324 [2024-10-07 11:31:43.825667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.825700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.825892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.825919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.825934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.826066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.829456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.829575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.829607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.829624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.829672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.829709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.829737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.829752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.829784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.324 [2024-10-07 11:31:43.836869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.836986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.837017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.324 [2024-10-07 11:31:43.837035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.837067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.837099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.837117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.837132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.837162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.839549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.839662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.839693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.839711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.839742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.839776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.839804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.839820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.840007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.324 [2024-10-07 11:31:43.846962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.847077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.847109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.324 [2024-10-07 11:31:43.847126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.324 [2024-10-07 11:31:43.847158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.324 [2024-10-07 11:31:43.847191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.324 [2024-10-07 11:31:43.847208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.324 [2024-10-07 11:31:43.847223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.324 [2024-10-07 11:31:43.847253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.324 [2024-10-07 11:31:43.850947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.324 [2024-10-07 11:31:43.851068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.324 [2024-10-07 11:31:43.851100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.324 [2024-10-07 11:31:43.851118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.851151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.851183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.851202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.851217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.851248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.325 [2024-10-07 11:31:43.857561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.857685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.857716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.325 [2024-10-07 11:31:43.857734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.857786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.857819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.857837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.857851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.857882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.325 [2024-10-07 11:31:43.861044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.861157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.861189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.325 [2024-10-07 11:31:43.861206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.861238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.861270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.861288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.861303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.861348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.325 [2024-10-07 11:31:43.867654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.867771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.867803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.325 [2024-10-07 11:31:43.867821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.867853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.867885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.867903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.867918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.867948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.325 [2024-10-07 11:31:43.871671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.871795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.871827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.325 [2024-10-07 11:31:43.871845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.871878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.871910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.871929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.871970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.872003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.325 [2024-10-07 11:31:43.879144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.879263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.879295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.325 [2024-10-07 11:31:43.879312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.879361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.879394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.879412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.879426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.879456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.325 [2024-10-07 11:31:43.881761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.881874] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.881905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.325 [2024-10-07 11:31:43.881923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.881970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.882005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.882025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.882040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.882224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.325 [2024-10-07 11:31:43.889242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.325 [2024-10-07 11:31:43.889369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.325 [2024-10-07 11:31:43.889401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.325 [2024-10-07 11:31:43.889419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.325 [2024-10-07 11:31:43.889452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.325 [2024-10-07 11:31:43.889485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.325 [2024-10-07 11:31:43.889503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.325 [2024-10-07 11:31:43.889517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.325 [2024-10-07 11:31:43.889547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.893198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.893352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.893385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.893403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.893437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.893470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.893488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.893503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.893533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.326 [2024-10-07 11:31:43.899860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.899987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.900019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.326 [2024-10-07 11:31:43.900036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.900070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.900103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.900121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.900136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.900166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.903326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.903446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.903477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.903495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.903528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.903560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.903579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.903593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.903624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.326 [2024-10-07 11:31:43.909955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.910071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.910111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.326 [2024-10-07 11:31:43.910129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.910162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.910218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.910239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.910253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.910467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.913986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.914115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.914148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.914165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.914203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.914236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.914255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.914270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.914314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.326 [2024-10-07 11:31:43.921447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.921567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.921599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.326 [2024-10-07 11:31:43.921617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.921650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.921683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.921701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.921716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.921746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.924085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.924198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.924229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.924247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.924278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.924311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.924345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.924360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.924569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.326 [2024-10-07 11:31:43.931544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.931660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.931692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.326 [2024-10-07 11:31:43.931709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.931742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.931774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.931792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.931807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.931837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.935539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.935654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.935685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.935703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.935736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.935769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.935787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.935802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.935831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.326 [2024-10-07 11:31:43.942212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.942380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.942423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.326 [2024-10-07 11:31:43.942443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.326 [2024-10-07 11:31:43.942478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.326 [2024-10-07 11:31:43.942511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.326 [2024-10-07 11:31:43.942529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.326 [2024-10-07 11:31:43.942544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.326 [2024-10-07 11:31:43.942575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.326 [2024-10-07 11:31:43.945628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.326 [2024-10-07 11:31:43.945739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.326 [2024-10-07 11:31:43.945775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.326 [2024-10-07 11:31:43.945820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.945854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.945888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.945907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.945921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.945951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.327 [2024-10-07 11:31:43.952303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.952433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.952469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.327 [2024-10-07 11:31:43.952488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.952520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.952552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.952570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.952584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.952614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.327 [2024-10-07 11:31:43.956363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.956487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.956520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.327 [2024-10-07 11:31:43.956538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.956571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.956604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.956623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.956637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.956667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.327 [2024-10-07 11:31:43.963752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.963870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.963903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.327 [2024-10-07 11:31:43.963921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.963953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.963986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.964028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.964043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.964075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.327 [2024-10-07 11:31:43.966455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.966566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.966603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.327 [2024-10-07 11:31:43.966622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.966653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.966686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.966704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.966718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.966903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.327 [2024-10-07 11:31:43.973845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.973959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.973990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.327 [2024-10-07 11:31:43.974008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.974040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.974072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.974090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.974104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.974134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.327 [2024-10-07 11:31:43.977816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.977927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.977958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.327 [2024-10-07 11:31:43.977976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.978008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.978041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.978059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.978074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.978103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.327 [2024-10-07 11:31:43.984447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.984577] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.984609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.327 [2024-10-07 11:31:43.984627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.984659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.984692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.984709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.984723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.984753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.327 [2024-10-07 11:31:43.987906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.327 [2024-10-07 11:31:43.988017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.327 [2024-10-07 11:31:43.988048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.327 [2024-10-07 11:31:43.988066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.327 [2024-10-07 11:31:43.988097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.327 [2024-10-07 11:31:43.988130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.327 [2024-10-07 11:31:43.988148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.327 [2024-10-07 11:31:43.988162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.327 [2024-10-07 11:31:43.988192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:43.994546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:43.994672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:43.994703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.328 [2024-10-07 11:31:43.994721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:43.994753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:43.994785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:43.994803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:43.994819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:43.995012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.328 [2024-10-07 11:31:43.998526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:43.998653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:43.998685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.328 [2024-10-07 11:31:43.998703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:43.998766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:43.998802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:43.998829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:43.998843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:43.998874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:44.005951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.006072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.006105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.328 [2024-10-07 11:31:44.006122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.006154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.006187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.006205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.006229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.006259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.328 [2024-10-07 11:31:44.008623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.008733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.008764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.328 [2024-10-07 11:31:44.008781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.008812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.008844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.008863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.008877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.009061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:44.016047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.016161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.016192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.328 [2024-10-07 11:31:44.016210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.016241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.016273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.016291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.016343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.016377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.328 [2024-10-07 11:31:44.019999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.020114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.020146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.328 [2024-10-07 11:31:44.020164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.020196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.020227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.020245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.020260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.020290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:44.026685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.026807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.026838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.328 [2024-10-07 11:31:44.026855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.026887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.026920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.026938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.026952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.026982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.328 [2024-10-07 11:31:44.030091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.030199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.030230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.328 [2024-10-07 11:31:44.030246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.030278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.030340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.030362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.030377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.030407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:44.036781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.036911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.036943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.328 [2024-10-07 11:31:44.036961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.036993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.037025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.037043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.037057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.037245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.328 [2024-10-07 11:31:44.040729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.040849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.040881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.328 [2024-10-07 11:31:44.040898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.328 [2024-10-07 11:31:44.040930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.328 [2024-10-07 11:31:44.040962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.328 [2024-10-07 11:31:44.040980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.328 [2024-10-07 11:31:44.040994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.328 [2024-10-07 11:31:44.041024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.328 [2024-10-07 11:31:44.048122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.328 [2024-10-07 11:31:44.048237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.328 [2024-10-07 11:31:44.048268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.329 [2024-10-07 11:31:44.048286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.048332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.048367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.048386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.048400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.048430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.329 [2024-10-07 11:31:44.050837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.050946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.050977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.329 [2024-10-07 11:31:44.050994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.051026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.051228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.051255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.051270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.051427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.329 [2024-10-07 11:31:44.058214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.058352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.058385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.329 [2024-10-07 11:31:44.058402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.058436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.058469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.058487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.058502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.058531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.329 [2024-10-07 11:31:44.062072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.062193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.062224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.329 [2024-10-07 11:31:44.062242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.062273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.062334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.062356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.062371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.062401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.329 [2024-10-07 11:31:44.068652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.068772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.068804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.329 [2024-10-07 11:31:44.068821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.068853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.068889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.068907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.068921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.068969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.329 [2024-10-07 11:31:44.072159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.072272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.072303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.329 [2024-10-07 11:31:44.072343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.072377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.072410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.072427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.072442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.072489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.329 [2024-10-07 11:31:44.078744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.078857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.078888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.329 [2024-10-07 11:31:44.078906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.079092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.079234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.079269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.079287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.079418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.329 [2024-10-07 11:31:44.082520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.082640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.082671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.329 [2024-10-07 11:31:44.082688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.082720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.082752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.082770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.082785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.082814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.329 [2024-10-07 11:31:44.089846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.089959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.089990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.329 [2024-10-07 11:31:44.090026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.329 [2024-10-07 11:31:44.090060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.329 [2024-10-07 11:31:44.090092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.329 [2024-10-07 11:31:44.090111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.329 [2024-10-07 11:31:44.090125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.329 [2024-10-07 11:31:44.090155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.329 [2024-10-07 11:31:44.092609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.329 [2024-10-07 11:31:44.092718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.329 [2024-10-07 11:31:44.092748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.330 [2024-10-07 11:31:44.092766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.092951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.093093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.093124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.093141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.093257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.330 [2024-10-07 11:31:44.099941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.100053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.100084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.330 [2024-10-07 11:31:44.100102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.100133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.100165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.100183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.100197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.100226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.330 [2024-10-07 11:31:44.103751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.103863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.103895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.330 [2024-10-07 11:31:44.103913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.103944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.103976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.104013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.104028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.104059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.330 [2024-10-07 11:31:44.110260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.110407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.110440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.330 [2024-10-07 11:31:44.110458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.110490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.110540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.110561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.110576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.110606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.330 [2024-10-07 11:31:44.113841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.113952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.113982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.330 [2024-10-07 11:31:44.114000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.114032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.114064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.114083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.114097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.114126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.330 [2024-10-07 11:31:44.120378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.120490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.120521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.330 [2024-10-07 11:31:44.120539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.120725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.120865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.120901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.120918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.121037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.330 [2024-10-07 11:31:44.124180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.124301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.124348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.330 [2024-10-07 11:31:44.124367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.124400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.124433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.124452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.124466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.124496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.330 [2024-10-07 11:31:44.131502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.131621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.131652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.330 [2024-10-07 11:31:44.131670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.131702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.131734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.131753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.131767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.131796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.330 [2024-10-07 11:31:44.134273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.134401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.134432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.330 [2024-10-07 11:31:44.134450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.134636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.134777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.330 [2024-10-07 11:31:44.134813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.330 [2024-10-07 11:31:44.134830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.330 [2024-10-07 11:31:44.134948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.330 [2024-10-07 11:31:44.141599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.330 [2024-10-07 11:31:44.141711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.330 [2024-10-07 11:31:44.141743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.330 [2024-10-07 11:31:44.141760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.330 [2024-10-07 11:31:44.141807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.330 [2024-10-07 11:31:44.141840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.141858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.141873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.141903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.331 [2024-10-07 11:31:44.147050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.148084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.148150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.331 [2024-10-07 11:31:44.148185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.148560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.148773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.148828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.148860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.149030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.331 [2024-10-07 11:31:44.152141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.152337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.152385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.331 [2024-10-07 11:31:44.152418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.153713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.153957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.153985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.154001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.154043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.331 [2024-10-07 11:31:44.159151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.159406] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.159446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.331 [2024-10-07 11:31:44.159465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.159499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.159532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.159551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.159581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.159615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.331 [2024-10-07 11:31:44.162579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.162718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.162751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.331 [2024-10-07 11:31:44.162769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.162801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.162833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.162851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.162865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.162896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.331 [2024-10-07 11:31:44.169245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.169371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.169403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.331 [2024-10-07 11:31:44.169420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.169453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.169486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.169504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.169518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.169548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.331 [2024-10-07 11:31:44.173293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.173421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.173452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.331 [2024-10-07 11:31:44.173470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.173502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.173535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.173553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.173567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.173598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.331 [2024-10-07 11:31:44.179851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.179992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.180024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.331 [2024-10-07 11:31:44.180043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.180076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.180129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.180151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.180166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.180196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.331 [2024-10-07 11:31:44.183397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.331 [2024-10-07 11:31:44.183510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.331 [2024-10-07 11:31:44.183541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.331 [2024-10-07 11:31:44.183559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.331 [2024-10-07 11:31:44.183590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.331 [2024-10-07 11:31:44.183622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.331 [2024-10-07 11:31:44.183640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.331 [2024-10-07 11:31:44.183655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.331 [2024-10-07 11:31:44.183684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.189961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.190074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.190106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.190124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.190336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.190480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.190516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.190534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.190653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.332 [2024-10-07 11:31:44.193774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.193893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.193924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.332 [2024-10-07 11:31:44.193942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.193991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.194025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.194044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.194058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.194088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.201123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.201236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.201266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.201284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.201330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.201367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.201386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.201400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.201431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.332 [2024-10-07 11:31:44.203858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.203980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.204011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.332 [2024-10-07 11:31:44.204028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.204214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.204373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.204409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.204427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.204546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.211211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.211342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.211375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.211393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.211426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.211458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.211477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.211491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.211537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.332 [2024-10-07 11:31:44.215165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.215278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.215310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.332 [2024-10-07 11:31:44.215344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.215378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.215411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.215429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.215443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.215474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.221659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.221781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.221813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.221831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.221864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.221897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.221915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.221929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.221958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.332 [2024-10-07 11:31:44.225257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.225383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.225415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.332 [2024-10-07 11:31:44.225433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.225465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.225497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.225515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.225530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.226728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.231752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.231879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.231927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.231946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.232134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.232278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.232314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.232348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.232467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.332 [2024-10-07 11:31:44.235571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.235683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.235714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.332 [2024-10-07 11:31:44.235733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.235765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.235814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.332 [2024-10-07 11:31:44.235836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.332 [2024-10-07 11:31:44.235850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.332 [2024-10-07 11:31:44.235880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.332 [2024-10-07 11:31:44.242881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.332 [2024-10-07 11:31:44.242995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.332 [2024-10-07 11:31:44.243026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.332 [2024-10-07 11:31:44.243043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.332 [2024-10-07 11:31:44.243075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.332 [2024-10-07 11:31:44.243107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.243125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.243140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.243170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.245659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.245932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.245975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.333 [2024-10-07 11:31:44.245994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.246125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.246266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.246311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.246343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.246406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.333 [2024-10-07 11:31:44.252970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.253084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.253116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.333 [2024-10-07 11:31:44.253133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.253165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.253213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.253235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.253250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.254456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.256799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.256908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.256939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.333 [2024-10-07 11:31:44.256957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.256988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.257020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.257038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.257053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.257083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.333 [2024-10-07 11:31:44.263226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.263353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.263386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.333 [2024-10-07 11:31:44.263404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.263437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.263470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.263488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.263502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.263533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.266891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.267001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.267032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.333 [2024-10-07 11:31:44.267050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.267081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.268271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.268310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.268342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.268551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.333 [2024-10-07 11:31:44.273329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.273595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.273634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.333 [2024-10-07 11:31:44.273653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.273785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.273916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.273951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.273969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.274029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.277065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.277176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.277206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.333 [2024-10-07 11:31:44.277224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.277256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.277288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.277306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.277334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.277367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.333 [2024-10-07 11:31:44.284369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.284483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.284515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.333 [2024-10-07 11:31:44.284547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.284582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.284614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.284633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.284647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.284677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.287153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.287436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.287480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.333 [2024-10-07 11:31:44.287500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.287633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.287762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.287797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.287815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.287875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.333 [2024-10-07 11:31:44.294465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.294578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.294610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.333 [2024-10-07 11:31:44.294628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.333 [2024-10-07 11:31:44.294660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.333 [2024-10-07 11:31:44.294691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.333 [2024-10-07 11:31:44.294710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.333 [2024-10-07 11:31:44.294724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.333 [2024-10-07 11:31:44.295912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.333 [2024-10-07 11:31:44.298268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.333 [2024-10-07 11:31:44.298400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.333 [2024-10-07 11:31:44.298433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.334 [2024-10-07 11:31:44.298450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.298483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.298515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.298533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.298561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.298594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.334 [2024-10-07 11:31:44.304707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.304821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.304852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.334 [2024-10-07 11:31:44.304870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.304902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.304934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.304952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.304966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.304996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.334 [2024-10-07 11:31:44.308376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.308486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.308517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.334 [2024-10-07 11:31:44.308535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.309733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.309964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.310004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.310023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.310852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.334 [2024-10-07 11:31:44.314801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.314912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.314944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.334 [2024-10-07 11:31:44.314961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.315148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.315289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.315337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.315358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.315477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.334 [2024-10-07 11:31:44.318558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.318686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.318717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.334 [2024-10-07 11:31:44.318735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.318767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.318799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.318817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.318831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.318861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.334 [2024-10-07 11:31:44.325878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.325990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.326021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.334 [2024-10-07 11:31:44.326039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.326071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.326103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.326121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.326135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.326165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.334 [2024-10-07 11:31:44.328665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.328773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.328804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.334 [2024-10-07 11:31:44.328821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.329007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.329147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.329183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.329200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.329331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.334 [2024-10-07 11:31:44.335971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.336083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.336114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.334 [2024-10-07 11:31:44.336132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.336178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.336211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.336229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.336243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.336273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.334 [2024-10-07 11:31:44.339829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.339943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.339974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.334 [2024-10-07 11:31:44.339992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.340024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.340056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.340080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.340094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.340124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.334 [2024-10-07 11:31:44.346277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.334 [2024-10-07 11:31:44.346432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.334 [2024-10-07 11:31:44.346465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.334 [2024-10-07 11:31:44.346483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.334 [2024-10-07 11:31:44.346515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.334 [2024-10-07 11:31:44.346564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.334 [2024-10-07 11:31:44.346586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.334 [2024-10-07 11:31:44.346601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.334 [2024-10-07 11:31:44.346632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.349917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.350026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.350056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.350074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.350105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.350137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.350155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.350190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.351414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.335 [2024-10-07 11:31:44.356402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.356514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.356545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.335 [2024-10-07 11:31:44.356563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.356762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.356902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.356930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.356945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.357062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.360270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.360403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.360435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.360452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.360485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.360517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.360535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.360549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.360580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.335 [2024-10-07 11:31:44.367593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.367706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.367737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.335 [2024-10-07 11:31:44.367754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.367786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.367818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.367836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.367851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.367880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.370376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.370485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.370529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.370548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.370736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.370878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.370904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.370919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.371035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.335 [2024-10-07 11:31:44.377686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.377799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.377831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.335 [2024-10-07 11:31:44.377848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.377880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.377912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.377930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.377945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.377976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.381738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.381860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.381891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.381908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.381941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.381973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.381991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.382005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.382035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.335 [2024-10-07 11:31:44.388405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.388527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.388558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.335 [2024-10-07 11:31:44.388576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.388608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.388661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.388682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.388697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.388727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.391838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.391951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.391982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.392000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.392031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.392064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.392082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.392096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.392126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.335 [2024-10-07 11:31:44.398493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.398615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.398646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.335 [2024-10-07 11:31:44.398664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.398710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.398745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.335 [2024-10-07 11:31:44.398763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.335 [2024-10-07 11:31:44.398777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.335 [2024-10-07 11:31:44.398807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.335 [2024-10-07 11:31:44.402607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.335 [2024-10-07 11:31:44.402744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.335 [2024-10-07 11:31:44.402776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.335 [2024-10-07 11:31:44.402794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.335 [2024-10-07 11:31:44.402827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.335 [2024-10-07 11:31:44.402872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.402892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.402906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.402937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.410023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.410147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.410178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.336 [2024-10-07 11:31:44.410196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.410228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.410272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.410306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.410348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.410383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.336 [2024-10-07 11:31:44.412701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.412810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.412840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.336 [2024-10-07 11:31:44.412857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.412890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.412922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.412940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.412954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.412984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.420123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.420237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.420268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.336 [2024-10-07 11:31:44.420286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.420334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.420381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.420402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.420417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.420447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.336 [2024-10-07 11:31:44.424125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.424277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.424309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.336 [2024-10-07 11:31:44.424362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.424398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.424432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.424450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.424465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.424496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.430870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.430991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.431024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.336 [2024-10-07 11:31:44.431041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.431074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.431106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.431124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.431138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.431168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.336 [2024-10-07 11:31:44.434216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.434348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.434380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.336 [2024-10-07 11:31:44.434398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.434431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.434463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.434481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.434495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.434525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.440961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.441077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.441108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.336 [2024-10-07 11:31:44.441126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.441157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.441190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.441228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.441244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.441446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.336 [2024-10-07 11:31:44.444909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.445029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.445060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.336 [2024-10-07 11:31:44.445078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.445110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.445142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.445163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.445177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.445207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.452360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.452473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.452504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.336 [2024-10-07 11:31:44.452522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.452554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.452586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.452604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.452618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.452648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.336 [2024-10-07 11:31:44.454997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.455114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.336 [2024-10-07 11:31:44.455144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.336 [2024-10-07 11:31:44.455162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.336 [2024-10-07 11:31:44.455208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.336 [2024-10-07 11:31:44.455242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.336 [2024-10-07 11:31:44.455260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.336 [2024-10-07 11:31:44.455275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.336 [2024-10-07 11:31:44.455304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.336 [2024-10-07 11:31:44.462452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.336 [2024-10-07 11:31:44.462586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.462617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.462635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.462670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.462702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.462721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.462735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.462764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.337 [2024-10-07 11:31:44.466461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.466575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.466606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.337 [2024-10-07 11:31:44.466623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.466663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.466696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.466716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.466729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.466759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.337 [2024-10-07 11:31:44.473099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.473228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.473259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.473277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.473309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.473359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.473378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.473393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.473424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.337 [2024-10-07 11:31:44.476555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.476664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.476695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.337 [2024-10-07 11:31:44.476712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.476752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.476793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.476812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.476826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.476856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.337 [2024-10-07 11:31:44.483188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.483301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.483347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.483366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.483398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.483430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.483448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.483463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.483492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.337 [2024-10-07 11:31:44.487198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.487334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.487367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.337 [2024-10-07 11:31:44.487385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.487418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.487450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.487468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.487482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.487512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.337 [2024-10-07 11:31:44.494610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.494728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.494759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.494777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.494810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.494846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.494864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.494895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.494928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.337 [2024-10-07 11:31:44.497292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.497413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.497445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.337 [2024-10-07 11:31:44.497463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.497494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.337 8778.25 IOPS, 34.29 MiB/s [2024-10-07T11:31:52.860Z] [2024-10-07 11:31:44.499299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.499344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.499364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.499524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.337 [2024-10-07 11:31:44.504707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.504819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.504850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.504868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.504900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.504941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.504959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.504974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.505003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.337 [2024-10-07 11:31:44.508708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.508858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.508895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.337 [2024-10-07 11:31:44.508913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.508946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.508979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.508997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.509011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.337 [2024-10-07 11:31:44.509042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.337 [2024-10-07 11:31:44.515474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.337 [2024-10-07 11:31:44.515682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.337 [2024-10-07 11:31:44.515753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.337 [2024-10-07 11:31:44.515775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.337 [2024-10-07 11:31:44.515817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.337 [2024-10-07 11:31:44.515856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.337 [2024-10-07 11:31:44.515874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.337 [2024-10-07 11:31:44.515889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.515920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.518912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.519023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.519054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.519072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.519104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.519136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.519154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.519168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.519198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.338 [2024-10-07 11:31:44.525569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.525686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.525719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.338 [2024-10-07 11:31:44.525737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.525769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.525801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.525819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.525834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.525864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.529747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.529870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.529901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.529919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.529951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.530002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.530021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.530035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.530066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.338 [2024-10-07 11:31:44.537191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.537375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.537418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.338 [2024-10-07 11:31:44.537438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.537472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.537505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.537524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.537545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.537597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.539839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.539953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.539984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.540002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.540034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.540066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.540083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.540098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.540128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.338 [2024-10-07 11:31:44.547389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.547504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.547536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.338 [2024-10-07 11:31:44.547553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.547592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.547625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.547643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.547657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.547704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.551525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.551641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.551673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.551690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.551722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.551755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.551773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.551787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.551817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.338 [2024-10-07 11:31:44.558188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.558355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.558389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.338 [2024-10-07 11:31:44.558417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.558452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.558484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.558503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.558517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.558547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.561616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.561725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.561756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.561773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.561804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.561848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.561866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.561881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.561910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.338 [2024-10-07 11:31:44.568278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.568404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.568436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.338 [2024-10-07 11:31:44.568472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.568506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.568697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.338 [2024-10-07 11:31:44.568735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.338 [2024-10-07 11:31:44.568753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.338 [2024-10-07 11:31:44.568887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.338 [2024-10-07 11:31:44.572257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.338 [2024-10-07 11:31:44.572391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.338 [2024-10-07 11:31:44.572423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.338 [2024-10-07 11:31:44.572441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.338 [2024-10-07 11:31:44.572473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.338 [2024-10-07 11:31:44.572506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.572524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.572538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.572568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.339 [2024-10-07 11:31:44.579627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.579743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.579774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.339 [2024-10-07 11:31:44.579791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.579823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.579856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.579874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.579888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.579919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.339 [2024-10-07 11:31:44.582360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.582469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.582501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.339 [2024-10-07 11:31:44.582518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.582714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.582868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.582916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.582935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.583055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.339 [2024-10-07 11:31:44.589722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.589840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.589872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.339 [2024-10-07 11:31:44.589889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.589921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.589953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.589972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.589986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.590016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.339 [2024-10-07 11:31:44.593635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.593750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.593782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.339 [2024-10-07 11:31:44.593799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.593843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.593876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.593894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.593908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.593937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.339 [2024-10-07 11:31:44.600269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.600416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.600449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.339 [2024-10-07 11:31:44.600467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.600500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.600533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.600551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.600566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.600596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.339 [2024-10-07 11:31:44.603727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.603860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.603892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.339 [2024-10-07 11:31:44.603910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.603942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.603975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.603993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.604007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.604037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.339 [2024-10-07 11:31:44.610386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.610499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.610531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.339 [2024-10-07 11:31:44.610549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.610597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.610633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.610651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.610666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.610849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.339 [2024-10-07 11:31:44.614367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.614489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.614520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.339 [2024-10-07 11:31:44.614538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.614571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.614620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.614642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.614656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.614687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.339 [2024-10-07 11:31:44.621746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.621862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.621894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.339 [2024-10-07 11:31:44.621911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.339 [2024-10-07 11:31:44.621964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.339 [2024-10-07 11:31:44.621998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.339 [2024-10-07 11:31:44.622017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.339 [2024-10-07 11:31:44.622032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.339 [2024-10-07 11:31:44.622062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.339 [2024-10-07 11:31:44.624457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.339 [2024-10-07 11:31:44.624567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.339 [2024-10-07 11:31:44.624598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.340 [2024-10-07 11:31:44.624615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.624648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.624680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.624698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.624712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.624905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.340 [2024-10-07 11:31:44.631840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.631953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.631985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.632002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.632034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.632066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.632085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.632099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.632129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.340 [2024-10-07 11:31:44.635889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.636006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.636037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.340 [2024-10-07 11:31:44.636054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.636086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.636118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.636136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.636169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.636202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.340 [2024-10-07 11:31:44.642588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.642711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.642744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.642762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.642794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.642827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.642846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.642860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.642890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.340 [2024-10-07 11:31:44.645978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.646090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.646121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.340 [2024-10-07 11:31:44.646138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.646170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.646202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.646220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.646234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.646264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.340 [2024-10-07 11:31:44.652681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.652794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.652825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.652843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.652874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.652918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.652954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.652969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.653002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.340 [2024-10-07 11:31:44.656792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.656913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.656965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.340 [2024-10-07 11:31:44.656984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.657017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.657049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.657067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.657081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.657112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.340 [2024-10-07 11:31:44.664200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.664330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.664362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.664379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.664413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.664445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.664463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.664477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.664507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.340 [2024-10-07 11:31:44.666885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.667010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.667040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.340 [2024-10-07 11:31:44.667058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.667090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.667122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.667139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.667153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.667374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.340 [2024-10-07 11:31:44.674329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.674436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.674468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.674485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.674516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.674566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.674587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.674601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.340 [2024-10-07 11:31:44.674631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.340 [2024-10-07 11:31:44.678384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.684433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.340 [2024-10-07 11:31:44.685121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.340 [2024-10-07 11:31:44.685167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.340 [2024-10-07 11:31:44.685188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.340 [2024-10-07 11:31:44.685386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.340 [2024-10-07 11:31:44.685512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.340 [2024-10-07 11:31:44.685543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.340 [2024-10-07 11:31:44.685560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.685604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.692810] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:57.341 [2024-10-07 11:31:44.695189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.695312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.695359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.695377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.695415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.695452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.695471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.695485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.695520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.341 [2024-10-07 11:31:44.707778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.709693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.709746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.709769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.710149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.711960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.712033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.712053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.712227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.719162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.719382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.719415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.719433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.719547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.719626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.719662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.719679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.719716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.341 [2024-10-07 11:31:44.729904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.730217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.730263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.730294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.730424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.730492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.730516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.730531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.730568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.740007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.740127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.740159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.740177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.740214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.740250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.740268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.740282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.740334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.341 [2024-10-07 11:31:44.750350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.751026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.751072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.751092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.751257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.751392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.751416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.751431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.751475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.760519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.760637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.760669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.760686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.760723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.760759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.760778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.760792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.760826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.341 [2024-10-07 11:31:44.771581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.771705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.771736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.771753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.771789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.771825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.771844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.771859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.771893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.782423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.782543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.782575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.782593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.782639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.782684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.782703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.782718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.782752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.341 [2024-10-07 11:31:44.792755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.792892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.792935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.792953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.793209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.793376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.341 [2024-10-07 11:31:44.793407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.341 [2024-10-07 11:31:44.793424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.341 [2024-10-07 11:31:44.793535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.341 [2024-10-07 11:31:44.802888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.341 [2024-10-07 11:31:44.803006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.341 [2024-10-07 11:31:44.803038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.341 [2024-10-07 11:31:44.803056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.341 [2024-10-07 11:31:44.803092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.341 [2024-10-07 11:31:44.803127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.803145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.803160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.803194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.813655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.813777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.813809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.813826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.813862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.813898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.813917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.813948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.813987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.342 [2024-10-07 11:31:44.824358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.824486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.824518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.824536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.824572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.824608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.824626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.824641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.824675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.834831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.834957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.834988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.835006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.835261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.835435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.835471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.835489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.835602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.342 [2024-10-07 11:31:44.844934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.845050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.845083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.845101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.845137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.845182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.845200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.845214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.845249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.855792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.855915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.855964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.855984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.856021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.856057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.856076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.856090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.856125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.342 [2024-10-07 11:31:44.866477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.866602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.866634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.866651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.866687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.866725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.866743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.866757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.866791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.876901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.877020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.877052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.877070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.877344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.877508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.877551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.877569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.877681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.342 [2024-10-07 11:31:44.887005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.887122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.887154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.887172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.887213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.887269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.887288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.887303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.887352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.897835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.897953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.897985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.898003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.898040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.898076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.898095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.898109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.898143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.342 [2024-10-07 11:31:44.908522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.908642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.908674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.908692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.908727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.342 [2024-10-07 11:31:44.908764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.342 [2024-10-07 11:31:44.908781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.342 [2024-10-07 11:31:44.908796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.342 [2024-10-07 11:31:44.908830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.342 [2024-10-07 11:31:44.918974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.342 [2024-10-07 11:31:44.919093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.342 [2024-10-07 11:31:44.919125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.342 [2024-10-07 11:31:44.919143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.342 [2024-10-07 11:31:44.919414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.919587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.919623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.919641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.919753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.343 [2024-10-07 11:31:44.929074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.929192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.929224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.929241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.929276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.929312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.929348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.929363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.929399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.343 [2024-10-07 11:31:44.939903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.940025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.940058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.940075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.940111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.940147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.940167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.940181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.940215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.343 [2024-10-07 11:31:44.950546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.950666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.950698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.950715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.950751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.950787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.950806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.950820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.950855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.343 [2024-10-07 11:31:44.960878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.961005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.961037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.961072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.961368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.961538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.961573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.961591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.961703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.343 [2024-10-07 11:31:44.970989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.971107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.971139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.971156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.971191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.971228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.971246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.971260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.971294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.343 [2024-10-07 11:31:44.981698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.981818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.981850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.981867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.981904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.981940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.981959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.981973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.982007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.343 [2024-10-07 11:31:44.992240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:44.992372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:44.992404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:44.992422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:44.992459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:44.992496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:44.992532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:44.992548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:44.992585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.343 [2024-10-07 11:31:45.002724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:45.002842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:45.002874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:45.002891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:45.003147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:45.003310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:45.003358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:45.003376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:45.003488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.343 [2024-10-07 11:31:45.012822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:45.012941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.343 [2024-10-07 11:31:45.012973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.343 [2024-10-07 11:31:45.012991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.343 [2024-10-07 11:31:45.013027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.343 [2024-10-07 11:31:45.013072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.343 [2024-10-07 11:31:45.013089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.343 [2024-10-07 11:31:45.013104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.343 [2024-10-07 11:31:45.013140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.343 [2024-10-07 11:31:45.023535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.343 [2024-10-07 11:31:45.023655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.023687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.023704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.023740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.023776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.023795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.023810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.023844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.344 [2024-10-07 11:31:45.034104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.034242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.034275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.034313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.034369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.034406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.034425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.034439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.034474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.344 [2024-10-07 11:31:45.044525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.044648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.044680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.044701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.044958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.045109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.045146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.045164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.045277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.344 [2024-10-07 11:31:45.054711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.054831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.054863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.054881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.054917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.054953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.054971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.054986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.055020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.344 [2024-10-07 11:31:45.065497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.065617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.065649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.065667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.065720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.065758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.065777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.065791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.065827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.344 [2024-10-07 11:31:45.076055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.076173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.076205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.076223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.076270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.076308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.076342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.076357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.076397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.344 [2024-10-07 11:31:45.086417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.086537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.086569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.086586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.086842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.086994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.087030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.087048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.087161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.344 [2024-10-07 11:31:45.096519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.096644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.096676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.096693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.096729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.096766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.096785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.096831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.096869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.344 [2024-10-07 11:31:45.107246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.107380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.107412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.107430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.107466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.107502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.107520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.107535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.107569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.344 [2024-10-07 11:31:45.117844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.117964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.117996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.118013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.118049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.118085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.118108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.118123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.118157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.344 [2024-10-07 11:31:45.128190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.344 [2024-10-07 11:31:45.128338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.344 [2024-10-07 11:31:45.128370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.344 [2024-10-07 11:31:45.128388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.344 [2024-10-07 11:31:45.128646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.344 [2024-10-07 11:31:45.128822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.344 [2024-10-07 11:31:45.128858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.344 [2024-10-07 11:31:45.128875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.344 [2024-10-07 11:31:45.128987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.138300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.138431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.138480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.138499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.138537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.138573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.138592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.138606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.138640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.345 [2024-10-07 11:31:45.149008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.149140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.149172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.149191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.149227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.149264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.149282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.149297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.149348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.159630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.159753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.159786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.159803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.159839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.159875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.159894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.159908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.159942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.345 [2024-10-07 11:31:45.170024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.170143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.170175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.170192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.170497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.170685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.170722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.170740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.170852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.180133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.180252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.180284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.180301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.180351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.180390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.180409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.180423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.180457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.345 [2024-10-07 11:31:45.190828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.190945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.190977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.190995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.191031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.191067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.191086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.191101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.191134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.201607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.201727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.201759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.201777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.201813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.201859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.201877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.201891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.201925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.345 [2024-10-07 11:31:45.211995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.212116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.212148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.212165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.212446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.212610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.212646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.212664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.212776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.222163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.222283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.222338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.222357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.222395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.222433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.222458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.222472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.222507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.345 [2024-10-07 11:31:45.233022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.233155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.233187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.233205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.233242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.233279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.233298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.233312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.345 [2024-10-07 11:31:45.233366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.345 [2024-10-07 11:31:45.244376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.345 [2024-10-07 11:31:45.244555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.345 [2024-10-07 11:31:45.244606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.345 [2024-10-07 11:31:45.244687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.345 [2024-10-07 11:31:45.245725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.345 [2024-10-07 11:31:45.245946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.345 [2024-10-07 11:31:45.245993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.345 [2024-10-07 11:31:45.246011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.246989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.254829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.255032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.255076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.255097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.255141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.255179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.255198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.255213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.255247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.346 [2024-10-07 11:31:45.265848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.265971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.266004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.266022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.266058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.266095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.266113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.266128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.266162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.275951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.276072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.276105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.276123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.276159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.276196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.276232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.276248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.276284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.346 [2024-10-07 11:31:45.286059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.286180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.286213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.286231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.286267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.286332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.286355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.286370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.286404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.296381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.296500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.296532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.296550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.296807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.296971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.297006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.297024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.297137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.346 [2024-10-07 11:31:45.306545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.306664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.306696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.306713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.306749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.306786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.306804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.306818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.306853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.317406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.317550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.317584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.317602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.317638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.317675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.317694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.317708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.317743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.346 [2024-10-07 11:31:45.328071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.328190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.328222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.328240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.328275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.328312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.328346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.328362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.328397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.338522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.338641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.338673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.338691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.338948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.339113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.339149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.339166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.339278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.346 [2024-10-07 11:31:45.348669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.348787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.348820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.348837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.348893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.348930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.348949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.346 [2024-10-07 11:31:45.348963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.346 [2024-10-07 11:31:45.348999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.346 [2024-10-07 11:31:45.359518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.346 [2024-10-07 11:31:45.359646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.346 [2024-10-07 11:31:45.359679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.346 [2024-10-07 11:31:45.359698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.346 [2024-10-07 11:31:45.359734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.346 [2024-10-07 11:31:45.359771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.346 [2024-10-07 11:31:45.359790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.359805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.359839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.347 [2024-10-07 11:31:45.370910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.371039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.371072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.371090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.371128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.371165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.371184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.371199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.371234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.347 [2024-10-07 11:31:45.382990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.383115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.383147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.383166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.383204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.383242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.383260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.383298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.383357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.347 [2024-10-07 11:31:45.394279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.394492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.394531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.394549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.394585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.394622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.394640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.394654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.394689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.347 [2024-10-07 11:31:45.405278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.405411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.405444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.405461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.405497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.405533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.405552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.405566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.405600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.347 [2024-10-07 11:31:45.415970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.416089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.416120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.416137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.416173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.416209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.416227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.416241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.416275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.347 [2024-10-07 11:31:45.426456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.426575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.426623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.426642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.426899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.427052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.427088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.427107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.427218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.347 [2024-10-07 11:31:45.436614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.436739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.436771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.436788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.436823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.436861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.436878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.436892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.436925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.347 [2024-10-07 11:31:45.447433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.447552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.447583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.447600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.447636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.447671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.447689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.447704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.447738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.347 [2024-10-07 11:31:45.458010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.458131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.458162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.458180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.458216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.347 [2024-10-07 11:31:45.458271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.347 [2024-10-07 11:31:45.458306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.347 [2024-10-07 11:31:45.458337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.347 [2024-10-07 11:31:45.458376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.347 [2024-10-07 11:31:45.468347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.347 [2024-10-07 11:31:45.468466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.347 [2024-10-07 11:31:45.468498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.347 [2024-10-07 11:31:45.468516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.347 [2024-10-07 11:31:45.468772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.468924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.468961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.468979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.469097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.348 [2024-10-07 11:31:45.478454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.478572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.478604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.478621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.478657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.478694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.478711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.478725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.479467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.348 [2024-10-07 11:31:45.488996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.489122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.489154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.489171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.489207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.489244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.489262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.489276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.489347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.348 8811.89 IOPS, 34.42 MiB/s [2024-10-07T11:31:52.871Z] [2024-10-07 11:31:45.501966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.502253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.502298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.502332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.503257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.503493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.503520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.503534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.503571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.348 [2024-10-07 11:31:45.513451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.513572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.513604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.513621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.513657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.513692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.513711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.513726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.513760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.348 [2024-10-07 11:31:45.523781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.523902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.523933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.523951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.524206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.524374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.524411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.524428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.524541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.348 [2024-10-07 11:31:45.533892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.534009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.534041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.534080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.534128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.534165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.534183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.534197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.534231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.348 [2024-10-07 11:31:45.544835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.544953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.544984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.545002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.545038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.545074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.545092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.545106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.545140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.348 [2024-10-07 11:31:45.555512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.555639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.555670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.555688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.555723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.555758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.555776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.555790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.555824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.348 [2024-10-07 11:31:45.565967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.566088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.348 [2024-10-07 11:31:45.566119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.348 [2024-10-07 11:31:45.566136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.348 [2024-10-07 11:31:45.566420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.348 [2024-10-07 11:31:45.566575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.348 [2024-10-07 11:31:45.566619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.348 [2024-10-07 11:31:45.566636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.348 [2024-10-07 11:31:45.566749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.348 [2024-10-07 11:31:45.576082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.348 [2024-10-07 11:31:45.576200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.576231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.576249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.576285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.576338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.576360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.576374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.576410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.349 [2024-10-07 11:31:45.586999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.587120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.587152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.587169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.587205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.587243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.587261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.587275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.587309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.349 [2024-10-07 11:31:45.597738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.597859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.597891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.597909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.597945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.597981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.598000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.598014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.598048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.349 [2024-10-07 11:31:45.608210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.608346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.608379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.608397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.608434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.608470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.608488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.608502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.608757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.349 [2024-10-07 11:31:45.618525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.618645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.618676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.618694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.618730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.618776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.618794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.618808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.618842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.349 [2024-10-07 11:31:45.629413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.629533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.629565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.629583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.629620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.629657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.629675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.629689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.629723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.349 [2024-10-07 11:31:45.640159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.640278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.640310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.640344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.640407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.640445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.640463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.640477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.640530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.349 [2024-10-07 11:31:45.650642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.650762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.650793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.650811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.651078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.651230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.651266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.651284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.651420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.349 [2024-10-07 11:31:45.660790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.660911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.660942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.660960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.660996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.661033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.661051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.661065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.661099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.349 [2024-10-07 11:31:45.671643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.671762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.671794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.671812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.671848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.671884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.671903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.349 [2024-10-07 11:31:45.671935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.349 [2024-10-07 11:31:45.671972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.349 [2024-10-07 11:31:45.682259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.349 [2024-10-07 11:31:45.682403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.349 [2024-10-07 11:31:45.682437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.349 [2024-10-07 11:31:45.682454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.349 [2024-10-07 11:31:45.682490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.349 [2024-10-07 11:31:45.682526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.349 [2024-10-07 11:31:45.682544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.682558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.682594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.350 [2024-10-07 11:31:45.692692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.692812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.692843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.692861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.693119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.693272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.693297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.693312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.693441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.350 [2024-10-07 11:31:45.702837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.702957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.702989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.703007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.703043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.703080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.703098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.703113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.703147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.350 [2024-10-07 11:31:45.713626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.713745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.713794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.713813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.713850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.713887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.713905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.713919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.713953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.350 [2024-10-07 11:31:45.724278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.724410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.724443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.724461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.724497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.724534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.724552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.724566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.724600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.350 [2024-10-07 11:31:45.734851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.734969] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.735001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.735019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.735055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.735092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.735110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.735124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.735393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.350 [2024-10-07 11:31:45.745134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.745252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.745283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.745301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.745352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.745410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.745430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.745444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.745478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.350 [2024-10-07 11:31:45.756032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.756153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.756185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.756203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.756239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.756275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.756293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.756307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.756359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.350 [2024-10-07 11:31:45.767333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.767510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.767558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.767589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.767649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.768712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.768771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.768803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.769064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.350 [2024-10-07 11:31:45.777833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.777977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.778012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.350 [2024-10-07 11:31:45.778030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.350 [2024-10-07 11:31:45.778068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.350 [2024-10-07 11:31:45.778105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-10-07 11:31:45.778124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.350 [2024-10-07 11:31:45.778138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.350 [2024-10-07 11:31:45.778198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.350 [2024-10-07 11:31:45.788816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.350 [2024-10-07 11:31:45.788940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.350 [2024-10-07 11:31:45.788972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.788990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.789026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.789062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.789080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.789094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.789129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.351 [2024-10-07 11:31:45.798922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.799040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.799072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.799089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.799125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.799160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.799178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.799193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.799227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.351 [2024-10-07 11:31:45.809028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.809146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.809178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.809195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.809231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.809266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.809285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.809299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.809859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.351 [2024-10-07 11:31:45.819464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.819585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.819618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.819655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.819931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.820087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.820123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.820140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.820254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.351 [2024-10-07 11:31:45.829588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.829710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.829741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.829759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.829795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.829831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.829850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.829864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.829898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.351 [2024-10-07 11:31:45.840432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.840552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.840585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.840602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.840639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.840675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.840693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.840707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.840741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.351 [2024-10-07 11:31:45.851029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.851148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.851180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.851198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.851233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.851270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.851306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.851346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.851384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.351 [2024-10-07 11:31:45.861469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.861586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.861619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.861647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.861902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.862066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.862101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.862118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.862229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.351 [2024-10-07 11:31:45.871631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.871749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.871781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.871798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.871835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.871871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.871890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.871904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.871939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.351 [2024-10-07 11:31:45.882718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.882847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.882879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.882897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.882934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.882970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.882988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.883003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.883037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.351 [2024-10-07 11:31:45.893352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.351 [2024-10-07 11:31:45.893472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.351 [2024-10-07 11:31:45.893504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.351 [2024-10-07 11:31:45.893522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.351 [2024-10-07 11:31:45.893557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.351 [2024-10-07 11:31:45.893594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.351 [2024-10-07 11:31:45.893612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.351 [2024-10-07 11:31:45.893626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.351 [2024-10-07 11:31:45.893670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.352 [2024-10-07 11:31:45.903920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.904042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.904082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.904100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.904370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.904525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.904560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.904578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.904701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.352 [2024-10-07 11:31:45.914125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.914232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.914263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.914280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.914349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.914389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.914408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.914422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.914456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.352 [2024-10-07 11:31:45.925104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.925225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.925257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.925279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.925352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.925393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.925411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.925425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.925459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.352 [2024-10-07 11:31:45.935861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.935982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.936014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.936032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.936068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.936103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.936121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.936135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.936169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.352 [2024-10-07 11:31:45.946459] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe16a00 was disconnected and freed. reset controller. 00:20:57.352 [2024-10-07 11:31:45.946611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.946679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.946943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.947250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.947293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.947313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.947470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.947521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.947542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.947557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.947588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.352 [2024-10-07 11:31:45.950159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.950280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.950331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.950350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.950403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.352 [2024-10-07 11:31:45.957069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.957142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.352 [2024-10-07 11:31:45.957222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.957250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.352 [2024-10-07 11:31:45.957267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.957345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.352 [2024-10-07 11:31:45.957374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.352 [2024-10-07 11:31:45.957390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.352 [2024-10-07 11:31:45.957409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.958140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.352 [2024-10-07 11:31:45.958178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.958194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.958209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.958417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.352 [2024-10-07 11:31:45.958444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.352 [2024-10-07 11:31:45.958459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.352 [2024-10-07 11:31:45.958472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.352 [2024-10-07 11:31:45.958563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:45.967700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.967750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.967843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.967873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.353 [2024-10-07 11:31:45.967890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.967938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.967961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:45.967976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.968008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.968031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.968057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.968091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.968107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.968123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.968137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.968150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.968181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.353 [2024-10-07 11:31:45.968198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:45.978458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.978513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.978608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.978639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:45.978657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.978705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.978728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.353 [2024-10-07 11:31:45.978743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.978775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.978798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.978825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.978843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.978857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.978873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.978887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.978901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.978930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.353 [2024-10-07 11:31:45.978947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:45.988721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.988770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.988862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.988893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.353 [2024-10-07 11:31:45.988910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.988957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.988998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:45.989016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.989269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.989300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.989452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.989489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.989507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.989524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.989538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.989552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.989660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.353 [2024-10-07 11:31:45.989680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:45.998847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.998896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:45.998987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.999017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:45.999034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.999082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:45.999105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.353 [2024-10-07 11:31:45.999120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:45.999151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.999174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:45.999201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.999219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.999233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:45.999250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:45.999264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:45.999277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:46.000015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.353 [2024-10-07 11:31:46.000053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:46.009583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:46.009642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:46.009735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:46.009766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.353 [2024-10-07 11:31:46.009783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:46.009831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:46.009854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:46.009869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.353 [2024-10-07 11:31:46.009901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:46.009925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.353 [2024-10-07 11:31:46.009965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:46.009985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:46.009999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:46.010015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.353 [2024-10-07 11:31:46.010029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.353 [2024-10-07 11:31:46.010043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.353 [2024-10-07 11:31:46.010072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.353 [2024-10-07 11:31:46.010089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.353 [2024-10-07 11:31:46.020155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:46.020205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.353 [2024-10-07 11:31:46.020296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.353 [2024-10-07 11:31:46.020343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.353 [2024-10-07 11:31:46.020362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.020412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.020435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.354 [2024-10-07 11:31:46.020450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.020483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.020506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.020533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.020552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.020583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.020601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.020615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.020628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.020660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.354 [2024-10-07 11:31:46.020677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.354 [2024-10-07 11:31:46.030513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.030563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.030656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.030687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.354 [2024-10-07 11:31:46.030704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.030751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.030775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.354 [2024-10-07 11:31:46.030790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.031051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.031083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.031217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.031243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.031258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.031275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.031289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.031302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.031424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.354 [2024-10-07 11:31:46.031447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.354 [2024-10-07 11:31:46.040635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.040709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.040787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.040814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.354 [2024-10-07 11:31:46.040831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.040895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.040922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.354 [2024-10-07 11:31:46.040955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.040975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.041724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.041767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.041785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.041799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.041970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.354 [2024-10-07 11:31:46.041995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.042010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.042023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.042132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.354 [2024-10-07 11:31:46.051285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.051345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.051440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.051472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.354 [2024-10-07 11:31:46.051489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.051536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.051559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.354 [2024-10-07 11:31:46.051575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.051607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.051630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.051657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.051674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.051688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.051705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.051719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.051732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.051761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.354 [2024-10-07 11:31:46.051778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.354 [2024-10-07 11:31:46.061740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.061809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.061904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.061935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.354 [2024-10-07 11:31:46.061952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.062000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.062022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.354 [2024-10-07 11:31:46.062037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.354 [2024-10-07 11:31:46.062069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.062092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.354 [2024-10-07 11:31:46.062120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.062137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.062152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.062167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.354 [2024-10-07 11:31:46.062182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.354 [2024-10-07 11:31:46.062194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.354 [2024-10-07 11:31:46.062225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.354 [2024-10-07 11:31:46.062242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.354 [2024-10-07 11:31:46.071991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.072040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.354 [2024-10-07 11:31:46.072132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.354 [2024-10-07 11:31:46.072162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.072179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.072227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.072250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.072265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.072545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.072578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.072714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.072739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.072754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.072788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.072804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.072817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.072924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.355 [2024-10-07 11:31:46.072944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.355 [2024-10-07 11:31:46.082116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.082190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.082272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.082331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.082352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.082421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.082449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.082465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.082483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.083212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.083252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.083270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.083284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.083470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.355 [2024-10-07 11:31:46.083497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.083511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.083525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.083615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.355 [2024-10-07 11:31:46.092681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.092732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.092825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.092855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.092872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.092920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.092943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.092958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.093009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.093033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.093060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.093078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.093093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.093109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.093123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.093136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.093166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.355 [2024-10-07 11:31:46.093183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.355 [2024-10-07 11:31:46.103176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.103227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.103333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.103365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.103382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.103430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.103453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.103468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.103500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.103523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.103550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.103568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.103582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.103598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.103612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.103625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.103654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.355 [2024-10-07 11:31:46.103671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.355 [2024-10-07 11:31:46.113451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.113501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.113834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.113878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.113898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.113950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.113972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.113988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.114125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.114154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.114257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.114278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.114307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.114343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.355 [2024-10-07 11:31:46.114360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.355 [2024-10-07 11:31:46.114373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.355 [2024-10-07 11:31:46.114414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.355 [2024-10-07 11:31:46.114433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.355 [2024-10-07 11:31:46.123590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.123664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.355 [2024-10-07 11:31:46.123743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.123771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.355 [2024-10-07 11:31:46.123788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.123852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.355 [2024-10-07 11:31:46.123878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.355 [2024-10-07 11:31:46.123895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.355 [2024-10-07 11:31:46.123913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.355 [2024-10-07 11:31:46.124654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.124695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.124713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.124727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.124898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.356 [2024-10-07 11:31:46.124922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.124953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.124967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.125058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.356 [2024-10-07 11:31:46.134025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.134074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.134167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.134197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.356 [2024-10-07 11:31:46.134214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.134262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.134296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.356 [2024-10-07 11:31:46.134314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.134366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.134390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.134417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.134435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.134449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.134465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.134479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.134492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.134521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.356 [2024-10-07 11:31:46.134538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.356 [2024-10-07 11:31:46.144518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.144569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.144661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.144692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.356 [2024-10-07 11:31:46.144709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.144756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.144778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.356 [2024-10-07 11:31:46.144794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.144825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.144865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.144895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.144913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.144927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.144943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.144957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.144970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.145001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.356 [2024-10-07 11:31:46.145018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.356 [2024-10-07 11:31:46.154779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.154829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.155153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.155196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.356 [2024-10-07 11:31:46.155216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.155268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.155291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.356 [2024-10-07 11:31:46.155306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.155460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.155490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.155594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.155615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.155630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.155647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.155661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.155675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.155712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.356 [2024-10-07 11:31:46.155731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.356 [2024-10-07 11:31:46.164898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.164973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.165051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.165078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.356 [2024-10-07 11:31:46.165112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.165896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.165938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.356 [2024-10-07 11:31:46.165957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.165976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.166165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.166195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.166210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.166223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.166349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.356 [2024-10-07 11:31:46.166373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.356 [2024-10-07 11:31:46.166387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.356 [2024-10-07 11:31:46.166401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.356 [2024-10-07 11:31:46.166432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.356 [2024-10-07 11:31:46.175388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.175438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.356 [2024-10-07 11:31:46.175529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.175560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.356 [2024-10-07 11:31:46.175577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.175625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.356 [2024-10-07 11:31:46.175648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.356 [2024-10-07 11:31:46.175663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.356 [2024-10-07 11:31:46.175695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.175719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.356 [2024-10-07 11:31:46.175746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.175763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.175777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.175793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.175808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.175837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.175871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.175889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.185867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.185919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.186015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.186047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.186063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.186113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.186136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.357 [2024-10-07 11:31:46.186151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.186183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.186206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.186233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.186250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.186264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.186280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.186309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.186349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.186383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.186401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.196033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.196084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.196409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.196452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.357 [2024-10-07 11:31:46.196472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.196523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.196547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.196562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.196689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.196717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.196839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.196860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.196874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.196891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.196905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.196918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.196956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.196975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.206154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.206229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.206333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.206365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.206382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.207149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.207192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.357 [2024-10-07 11:31:46.207211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.207231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.207416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.207446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.207461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.207476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.207568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.207588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.207602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.207616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.207662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.216564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.216615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.216705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.216735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.216752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.216819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.216843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.357 [2024-10-07 11:31:46.216859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.216892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.216915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.216942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.216959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.216973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.216989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.217003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.217016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.217046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.217063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.226996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.227048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.227141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.227171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.357 [2024-10-07 11:31:46.227188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.227235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.227258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.227274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.227305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.227345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.357 [2024-10-07 11:31:46.227374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.227392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.227406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.227422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.357 [2024-10-07 11:31:46.227436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.357 [2024-10-07 11:31:46.227449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.357 [2024-10-07 11:31:46.227479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.357 [2024-10-07 11:31:46.227509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.357 [2024-10-07 11:31:46.237124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.237198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.357 [2024-10-07 11:31:46.237278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.237307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.357 [2024-10-07 11:31:46.237339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.357 [2024-10-07 11:31:46.237631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.357 [2024-10-07 11:31:46.237672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.237692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.237711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.237878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.237914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.237931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.237944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.238052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.238072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.238086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.238099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.238135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.247216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.247345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.247377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.247394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.247441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.248185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.248238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.248256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.248270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.248452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.248519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.248562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.248580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.248674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.248710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.248728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.248742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.248771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.257573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.257688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.257720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.257737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.257768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.257800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.257817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.257831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.257862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.258266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.258386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.258416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.258432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.258464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.258495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.258513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.258527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.259689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.268016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.268131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.268162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.268179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.268210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.268259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.268279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.268293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.268340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.268395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.268478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.268506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.268523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.268553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.268584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.268602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.268616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.268645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.278167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.278294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.278341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.278361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.278614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.278791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.278827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.278845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.278955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.278980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.279064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.279093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.279110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.279141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.279173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.279191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.279205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.279234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.288260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.288392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.288424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.288441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.289163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.289400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.289453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.289472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.289569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.289595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.358 [2024-10-07 11:31:46.289675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.289705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.358 [2024-10-07 11:31:46.289722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.289753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.358 [2024-10-07 11:31:46.289785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.358 [2024-10-07 11:31:46.289803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.358 [2024-10-07 11:31:46.289817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.358 [2024-10-07 11:31:46.291091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.358 [2024-10-07 11:31:46.299138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.358 [2024-10-07 11:31:46.299332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.358 [2024-10-07 11:31:46.299394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.358 [2024-10-07 11:31:46.299428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.358 [2024-10-07 11:31:46.299488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.301100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.301162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.301195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.301546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.302574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.303702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.303769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.303829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.304066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.304249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.304296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.304348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.305979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.309883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.310010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.310049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.310067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.310904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.311147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.311184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.311202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.311294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.312692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.312804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.312836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.312853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.312887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.312919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.312937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.312951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.312981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.320277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.320456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.320500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.320528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.320576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.320622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.320651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.320705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.320755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.323496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.323671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.323739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.323770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.324793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.325038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.325078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.325096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.326035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.330656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.330778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.330811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.330828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.330860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.330893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.330911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.330925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.330956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.333643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.333759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.333791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.333808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.333840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.333872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.333890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.333904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.333935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.340750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.340885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.340917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.340935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.342101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.342385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.342424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.342442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.343168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.344394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.344504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.344535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.344552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.344584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.344616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.344635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.344649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.344679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.350856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.350971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.351002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.351027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.351059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.351091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.351109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.351123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.351154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.359 [2024-10-07 11:31:46.354486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.354599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.354629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.359 [2024-10-07 11:31:46.354646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.355827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.356091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.356128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.356145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.359 [2024-10-07 11:31:46.356888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.359 [2024-10-07 11:31:46.361217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.359 [2024-10-07 11:31:46.361356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.359 [2024-10-07 11:31:46.361389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.359 [2024-10-07 11:31:46.361406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.359 [2024-10-07 11:31:46.361439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.359 [2024-10-07 11:31:46.361471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.359 [2024-10-07 11:31:46.361489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.359 [2024-10-07 11:31:46.361503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.361534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.364574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.364686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.364717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.364734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.364765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.364797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.364816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.364830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.364859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.372040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.372156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.372187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.360 [2024-10-07 11:31:46.372204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.372251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.372289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.372308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.372368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.372407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.375060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.375183] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.375214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.375231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.375264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.375295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.375327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.375345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.375386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.382134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.382251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.382283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.360 [2024-10-07 11:31:46.382339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.383509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.383739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.383775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.383793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.384535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.385905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.386015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.386047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.386065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.386097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.386129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.386148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.386162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.386192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.392224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.392367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.392416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.360 [2024-10-07 11:31:46.392436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.392469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.392501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.392519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.392534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.392564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.395991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.396108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.396140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.396158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.396190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.396222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.396240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.396255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.397440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.402351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.402465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.402497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.360 [2024-10-07 11:31:46.402515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.402766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.402926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.402960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.402978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.403086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.406091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.406213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.406245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.406262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.406309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.406392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.406425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.406441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.406473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.412443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.412561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.412592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.360 [2024-10-07 11:31:46.412610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.412642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.412674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.412692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.412706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.412736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.360 [2024-10-07 11:31:46.416244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.360 [2024-10-07 11:31:46.416372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.360 [2024-10-07 11:31:46.416404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.360 [2024-10-07 11:31:46.416421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.360 [2024-10-07 11:31:46.416676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.360 [2024-10-07 11:31:46.416835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.360 [2024-10-07 11:31:46.416870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.360 [2024-10-07 11:31:46.416887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.360 [2024-10-07 11:31:46.416995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.360 [2024-10-07 11:31:46.423124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.423240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.423271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.423289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.423334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.423370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.423388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.423402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.423432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.426352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.426465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.426497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.426514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.426546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.426578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.426596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.426610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.426640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.433654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.433769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.433800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.433817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.433850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.433882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.433900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.433914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.433944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.436964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.437077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.437108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.437125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.437157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.437189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.437207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.437222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.437252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.443983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.444106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.444137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.444169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.444440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.444602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.444636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.444653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.444762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.447531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.447641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.447672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.447688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.447720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.447752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.447770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.447784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.447814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.454074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.454188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.454219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.454236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.454277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.454343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.454364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.454379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.454409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.457813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.457926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.457957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.457974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.458251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.458431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.458484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.458504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.458613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.464646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.464761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.464793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.464810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.464842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.464874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.464892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.464906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.464937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.467902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.468014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.468045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.468062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.468094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.468125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.468143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.468158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.468187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.475153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.475266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.475297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.475328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.475364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.475396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.475414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.475428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.475459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.361 [2024-10-07 11:31:46.478440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.478569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.478601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.361 [2024-10-07 11:31:46.478618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.478650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.478682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.478706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.361 [2024-10-07 11:31:46.478722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.361 [2024-10-07 11:31:46.478752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.361 [2024-10-07 11:31:46.485395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.361 [2024-10-07 11:31:46.485512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.361 [2024-10-07 11:31:46.485543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.361 [2024-10-07 11:31:46.485560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.361 [2024-10-07 11:31:46.485811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.361 [2024-10-07 11:31:46.485970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.361 [2024-10-07 11:31:46.486004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.486021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.486128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.488922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.489033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.489064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.489081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.489113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.489145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.489163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.489177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.489207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.495486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.495600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.495632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.495649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.495700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.496444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.496482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.496500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.496694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 8851.30 IOPS, 34.58 MiB/s [2024-10-07T11:31:52.885Z] [2024-10-07 11:31:46.503055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.503257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.503301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.503336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.503372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.503405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.503424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.503438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.503469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.505915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.506036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.506068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.506085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.506117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.506149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.506167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.506181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.506211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.513420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.513542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.513574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.513591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.513623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.513655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.513673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.513704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.513737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.516403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.516515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.516546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.516563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.516595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.516627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.516645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.516659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.516689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.524122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.524237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.524268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.524285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.524331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.524368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.524386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.524400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.524430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.526610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.526721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.526753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.526770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.527021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.527184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.527219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.527236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.527360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.534218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.534349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.534397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.534417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.535584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.535814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.535849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.535866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.536604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.536765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.536868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.536908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.536926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.537666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.537863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.537899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.537916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.538009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.544308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.544432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.544463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.544480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.544512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.544544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.544562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.544576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.544606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.362 [2024-10-07 11:31:46.547127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.547240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.547272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.362 [2024-10-07 11:31:46.547289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.547335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.547397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.547416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.547431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.547462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.362 [2024-10-07 11:31:46.554683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.362 [2024-10-07 11:31:46.554808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.362 [2024-10-07 11:31:46.554840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.362 [2024-10-07 11:31:46.554858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.362 [2024-10-07 11:31:46.554890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.362 [2024-10-07 11:31:46.554922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.362 [2024-10-07 11:31:46.554940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.362 [2024-10-07 11:31:46.554955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.362 [2024-10-07 11:31:46.554985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.363 [2024-10-07 11:31:46.557677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.557786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.557817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.363 [2024-10-07 11:31:46.557834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.557866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.557899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.557917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.557930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.557964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.363 [2024-10-07 11:31:46.565486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.565743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.565788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.363 [2024-10-07 11:31:46.565809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.565909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.565954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.565975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.565989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.566036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.363 [2024-10-07 11:31:46.568234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.568357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.568389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.363 [2024-10-07 11:31:46.568407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.568665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.568824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.568859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.568877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.568985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.363 [2024-10-07 11:31:46.575577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.575690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.575722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.363 [2024-10-07 11:31:46.575739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.575771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.575804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.575822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.575836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.575866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.363 [2024-10-07 11:31:46.578356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.578466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.578497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.363 [2024-10-07 11:31:46.578514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.578546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.578578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.578596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.578610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.578639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.363 [2024-10-07 11:31:46.585685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.585797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.585828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.363 [2024-10-07 11:31:46.585872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.585905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.585938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.585956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.585970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.586001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.363 [2024-10-07 11:31:46.588999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.363 [2024-10-07 11:31:46.589112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.363 [2024-10-07 11:31:46.589143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.363 [2024-10-07 11:31:46.589160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.363 [2024-10-07 11:31:46.589191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.363 [2024-10-07 11:31:46.589224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.363 [2024-10-07 11:31:46.589242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.363 [2024-10-07 11:31:46.589259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.363 [2024-10-07 11:31:46.589289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.363 [2024-10-07 11:31:46.595956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.596070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.596102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.364 [2024-10-07 11:31:46.596119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.596391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.596541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.596576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.596594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.596701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.364 [2024-10-07 11:31:46.599538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.599648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.599680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.364 [2024-10-07 11:31:46.599698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.599730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.599765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.599799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.599814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.599846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.364 [2024-10-07 11:31:46.606047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.606167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.606199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.364 [2024-10-07 11:31:46.606217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.606249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.606281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.606314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.606348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.606381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.364 [2024-10-07 11:31:46.609928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.610038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.610069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.364 [2024-10-07 11:31:46.610086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.610364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.610515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.610551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.610568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.610677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.364 [2024-10-07 11:31:46.616771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.616887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.616919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.364 [2024-10-07 11:31:46.616936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.616968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.617000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.617017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.617031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.617062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.364 [2024-10-07 11:31:46.620015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.620145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.620176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.364 [2024-10-07 11:31:46.620193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.620225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.620257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.620275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.620289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.620336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.364 [2024-10-07 11:31:46.627262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.627392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.627425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.364 [2024-10-07 11:31:46.627442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.627474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.627506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.627524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.627539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.627569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.364 [2024-10-07 11:31:46.630562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.630675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.630706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.364 [2024-10-07 11:31:46.630723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.630755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.364 [2024-10-07 11:31:46.630787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.364 [2024-10-07 11:31:46.630805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.364 [2024-10-07 11:31:46.630819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.364 [2024-10-07 11:31:46.630853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.364 [2024-10-07 11:31:46.637489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.364 [2024-10-07 11:31:46.637610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.364 [2024-10-07 11:31:46.637641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.364 [2024-10-07 11:31:46.637658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.364 [2024-10-07 11:31:46.637928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.638087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.638122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.638140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.638246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.640985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.641094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.641125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.365 [2024-10-07 11:31:46.641142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.641173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.641205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.641224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.641238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.641268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.365 [2024-10-07 11:31:46.647581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.647694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.647724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.365 [2024-10-07 11:31:46.647742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.648482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.648673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.648700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.648715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.648806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.651160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.651271] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.651303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.365 [2024-10-07 11:31:46.651335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.651589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.651755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.651790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.651824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.651934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.365 [2024-10-07 11:31:46.657902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.658014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.658046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.365 [2024-10-07 11:31:46.658063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.658095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.658127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.658145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.658159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.658189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.661249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.661371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.661403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.365 [2024-10-07 11:31:46.661420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.662145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.662375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.662403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.662420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.662513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.365 [2024-10-07 11:31:46.668271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.668397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.668429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.365 [2024-10-07 11:31:46.668446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.668478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.668510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.668529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.668543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.668572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.671532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.671645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.671691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.365 [2024-10-07 11:31:46.671710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.671744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.671776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.671794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.671808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.671838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.365 [2024-10-07 11:31:46.678437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.678549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.678580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.365 [2024-10-07 11:31:46.678597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.678854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.679019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.679057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.679075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.679182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.681914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.682022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.682052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.365 [2024-10-07 11:31:46.682071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.682102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.682135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.682153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.682167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.682197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.365 [2024-10-07 11:31:46.688527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.688637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.688668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.365 [2024-10-07 11:31:46.688686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.365 [2024-10-07 11:31:46.689423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.365 [2024-10-07 11:31:46.689631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.365 [2024-10-07 11:31:46.689658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.365 [2024-10-07 11:31:46.689673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.365 [2024-10-07 11:31:46.689765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.365 [2024-10-07 11:31:46.692085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.365 [2024-10-07 11:31:46.692441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.365 [2024-10-07 11:31:46.692485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.692505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.692661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.692777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.692799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.692813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.692852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.698780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.698892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.698923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.366 [2024-10-07 11:31:46.698940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.698972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.699005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.699023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.699037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.699067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.366 [2024-10-07 11:31:46.702171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.702279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.702339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.702359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.703083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.703273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.703300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.703330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.703444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.709145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.709265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.709297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.366 [2024-10-07 11:31:46.709327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.709363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.709396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.709414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.709428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.709459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.366 [2024-10-07 11:31:46.712417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.712528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.712560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.712577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.712609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.712642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.712660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.712674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.712705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.719347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.719462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.719493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.366 [2024-10-07 11:31:46.719511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.719764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.719923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.719958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.719976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.720084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.366 [2024-10-07 11:31:46.722849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.722960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.722991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.723024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.723058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.723090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.723108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.723122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.723153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.729439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.729552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.729584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.366 [2024-10-07 11:31:46.729601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.730353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.730545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.730572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.730587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.730679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.366 [2024-10-07 11:31:46.733006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.733118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.733148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.733166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.733431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.733579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.733614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.733631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.733739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.739759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.739880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.739911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.366 [2024-10-07 11:31:46.739929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.739960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.739993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.740028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.740044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.740075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.366 [2024-10-07 11:31:46.743090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.366 [2024-10-07 11:31:46.743209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.366 [2024-10-07 11:31:46.743240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.366 [2024-10-07 11:31:46.743257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.366 [2024-10-07 11:31:46.743288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.366 [2024-10-07 11:31:46.744027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.366 [2024-10-07 11:31:46.744065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.366 [2024-10-07 11:31:46.744083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.366 [2024-10-07 11:31:46.744272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.366 [2024-10-07 11:31:46.750184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.750311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.750355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.750373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.750406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.750439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.750457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.750471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.750501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.367 [2024-10-07 11:31:46.753493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.753603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.753634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.367 [2024-10-07 11:31:46.753651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.753683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.753715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.753733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.753747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.753777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.367 [2024-10-07 11:31:46.760610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.760741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.760772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.760790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.761042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.761189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.761214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.761229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.761350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.367 [2024-10-07 11:31:46.764191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.764301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.764345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.367 [2024-10-07 11:31:46.764364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.764396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.764428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.764447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.764461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.764490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.367 [2024-10-07 11:31:46.770717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.770829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.770864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.770881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.770913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.770944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.770963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.770977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.771007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.367 [2024-10-07 11:31:46.774570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.774681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.774712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.367 [2024-10-07 11:31:46.774731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.775000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.775149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.775186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.775204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.775312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.367 [2024-10-07 11:31:46.781387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.781502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.781534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.781551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.781583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.781615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.781633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.781648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.781678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.367 [2024-10-07 11:31:46.784663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.784774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.784805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.367 [2024-10-07 11:31:46.784822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.784853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.784885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.784903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.784917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.784947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.367 [2024-10-07 11:31:46.791899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.792014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.792045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.792063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.792095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.792127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.792145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.792177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.792211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.367 [2024-10-07 11:31:46.795233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.795360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.795393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.367 [2024-10-07 11:31:46.795409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.795442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.795474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.795492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.795506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.795536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.367 [2024-10-07 11:31:46.802170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.367 [2024-10-07 11:31:46.802281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.367 [2024-10-07 11:31:46.802338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.367 [2024-10-07 11:31:46.802358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.367 [2024-10-07 11:31:46.802611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.367 [2024-10-07 11:31:46.802759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.367 [2024-10-07 11:31:46.802795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.367 [2024-10-07 11:31:46.802813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.367 [2024-10-07 11:31:46.802921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.805711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.805819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.805849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.805866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.805898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.805930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.805948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.805963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.805993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.368 [2024-10-07 11:31:46.812258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.812385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.812432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.368 [2024-10-07 11:31:46.812451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.812484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.812516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.812534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.812548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.812578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.816022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.816134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.816165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.816183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.816466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.816616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.816651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.816669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.816777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.368 [2024-10-07 11:31:46.822857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.822972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.823005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.368 [2024-10-07 11:31:46.823022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.823054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.823086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.823105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.823119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.823149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.826111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.826223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.826255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.826272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.826332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.826388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.826407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.826421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.827144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.368 [2024-10-07 11:31:46.833368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.833487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.833519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.368 [2024-10-07 11:31:46.833537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.833569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.833601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.833619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.833633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.833663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.836598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.836712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.836743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.836761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.836792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.836825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.836845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.836860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.836890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.368 [2024-10-07 11:31:46.843509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.843624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.843655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.368 [2024-10-07 11:31:46.843673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.843924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.844058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.844092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.844110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.844239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.847103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.847217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.847249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.847267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.847298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.847346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.847366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.847381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.847411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.368 [2024-10-07 11:31:46.853599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.853711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.853743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.368 [2024-10-07 11:31:46.853760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.853792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.853824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.368 [2024-10-07 11:31:46.853842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.368 [2024-10-07 11:31:46.853856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.368 [2024-10-07 11:31:46.854614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.368 [2024-10-07 11:31:46.857228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.368 [2024-10-07 11:31:46.857351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.368 [2024-10-07 11:31:46.857383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.368 [2024-10-07 11:31:46.857400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.368 [2024-10-07 11:31:46.857652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.368 [2024-10-07 11:31:46.857786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.857820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.857837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.857945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.369 [2024-10-07 11:31:46.864049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.864164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.864195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.369 [2024-10-07 11:31:46.864231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.864264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.864297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.864329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.864346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.864377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.369 [2024-10-07 11:31:46.867333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.867445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.867476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.369 [2024-10-07 11:31:46.867493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.867524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.867556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.867574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.867589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.868338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.369 [2024-10-07 11:31:46.874540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.874654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.874685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.369 [2024-10-07 11:31:46.874702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.874734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.874766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.874784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.874798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.874828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.369 [2024-10-07 11:31:46.877799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.877911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.877942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.369 [2024-10-07 11:31:46.877959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.877990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.878022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.878057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.878073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.878105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.369 [2024-10-07 11:31:46.884729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.884841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.884873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.369 [2024-10-07 11:31:46.884890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.885141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.885275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.885308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.885341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.885450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.369 [2024-10-07 11:31:46.888290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.888410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.888441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.369 [2024-10-07 11:31:46.888458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.888489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.888521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.888538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.888553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.888583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.369 [2024-10-07 11:31:46.894819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.894932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.894963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.369 [2024-10-07 11:31:46.894981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.895012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.895043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.895061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.895075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.895813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.369 [2024-10-07 11:31:46.898458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.898572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.898604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.369 [2024-10-07 11:31:46.898620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.898872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.899018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.899050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.899067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.899175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.369 [2024-10-07 11:31:46.905279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.369 [2024-10-07 11:31:46.905405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.369 [2024-10-07 11:31:46.905436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.369 [2024-10-07 11:31:46.905454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.369 [2024-10-07 11:31:46.905485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.369 [2024-10-07 11:31:46.905516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.369 [2024-10-07 11:31:46.905534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.369 [2024-10-07 11:31:46.905548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.369 [2024-10-07 11:31:46.905579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.908547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.908656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.908687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.908704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.908736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.908767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.908785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.908799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.909536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.370 [2024-10-07 11:31:46.915759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.915881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.915913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.370 [2024-10-07 11:31:46.915931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.915982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.916014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.916033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.916047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.916077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.918990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.919105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.919136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.919154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.919186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.919218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.919236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.919250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.919280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.370 [2024-10-07 11:31:46.925947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.926066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.926099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.370 [2024-10-07 11:31:46.926116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.926411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.926564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.926600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.926618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.926727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.929492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.929602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.929633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.929650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.929682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.929713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.929731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.929767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.929800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.370 [2024-10-07 11:31:46.936038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.936153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.936184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.370 [2024-10-07 11:31:46.936201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.936233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.936265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.936283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.936297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.937039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.939726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.939837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.939868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.939885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.940141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.940289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.940336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.940356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.940465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.370 [2024-10-07 11:31:46.946488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.946602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.946633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.370 [2024-10-07 11:31:46.946651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.946682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.946714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.946731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.946746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.946776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.949815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.949925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.949977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.949995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.950028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.950789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.950828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.950845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.951017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.370 [2024-10-07 11:31:46.956934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.957047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.957078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.370 [2024-10-07 11:31:46.957095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.957126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.957158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.957176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.957190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.370 [2024-10-07 11:31:46.957220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.370 [2024-10-07 11:31:46.960207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.370 [2024-10-07 11:31:46.960331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.370 [2024-10-07 11:31:46.960362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.370 [2024-10-07 11:31:46.960380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.370 [2024-10-07 11:31:46.960413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.370 [2024-10-07 11:31:46.960444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.370 [2024-10-07 11:31:46.960462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.370 [2024-10-07 11:31:46.960476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.960507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:46.967101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.967214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.967245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:46.967262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.967528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.967696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.967734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.967752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.967859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.371 [2024-10-07 11:31:46.970632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.970745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.970775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.371 [2024-10-07 11:31:46.970792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.970824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.970856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.970873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.970888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.970917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:46.977190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.977302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.977349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:46.977367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.977400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.977432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.977450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.977464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.978186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.371 [2024-10-07 11:31:46.980834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.980946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.980977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.371 [2024-10-07 11:31:46.980994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.981260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.981437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.981472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.981490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.981616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:46.987584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.987699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.987731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:46.987748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.987780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.987812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.987830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.987845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.987875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.371 [2024-10-07 11:31:46.990917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.991028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.991059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.371 [2024-10-07 11:31:46.991076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.991107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.991860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.991899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.991918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.992087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:46.997988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:46.998099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:46.998129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:46.998147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:46.998178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:46.998210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:46.998228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:46.998242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:46.998272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.371 [2024-10-07 11:31:47.001231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:47.001356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:47.001388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.371 [2024-10-07 11:31:47.001423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:47.001457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:47.001489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:47.001508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:47.001522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:47.001552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:47.008179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:47.008294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:47.008339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:47.008359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:47.008611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:47.008770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:47.008806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:47.008823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:47.008931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.371 [2024-10-07 11:31:47.011702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:47.011812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:47.011843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.371 [2024-10-07 11:31:47.011860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:47.011892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:47.011923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.371 [2024-10-07 11:31:47.011942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.371 [2024-10-07 11:31:47.011955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.371 [2024-10-07 11:31:47.011986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.371 [2024-10-07 11:31:47.018268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.371 [2024-10-07 11:31:47.018413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.371 [2024-10-07 11:31:47.018445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.371 [2024-10-07 11:31:47.018463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.371 [2024-10-07 11:31:47.018495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.371 [2024-10-07 11:31:47.019220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.019274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.019293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.019499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.021887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.021998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.022029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.022046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.022310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.022491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.022523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.022540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.022647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.372 [2024-10-07 11:31:47.028650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.028765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.028796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.372 [2024-10-07 11:31:47.028815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.028847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.028879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.028897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.028911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.028941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.031971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.032081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.032112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.032129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.032160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.032192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.032209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.032223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.032960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.372 [2024-10-07 11:31:47.039083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.039197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.039228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.372 [2024-10-07 11:31:47.039246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.039278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.039310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.039345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.039361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.039392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.042440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.042556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.042587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.042604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.042636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.042669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.042687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.042702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.042733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.372 [2024-10-07 11:31:47.049367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.049468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.049499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.372 [2024-10-07 11:31:47.049516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.049547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.049580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.049598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.049612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.049861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.053091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.053202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.053233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.053250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.053304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.053355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.053374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.053389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.053420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.372 [2024-10-07 11:31:47.059515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.059628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.059660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.372 [2024-10-07 11:31:47.059678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.059709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.059741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.059760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.059775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.059811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.063388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.063498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.063529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.063547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.063797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.063952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.063978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.063993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.064100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.372 [2024-10-07 11:31:47.070305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.070440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.070472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.372 [2024-10-07 11:31:47.070489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.372 [2024-10-07 11:31:47.070521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.372 [2024-10-07 11:31:47.070553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.372 [2024-10-07 11:31:47.070572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.372 [2024-10-07 11:31:47.070604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.372 [2024-10-07 11:31:47.070637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.372 [2024-10-07 11:31:47.073481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.372 [2024-10-07 11:31:47.073589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.372 [2024-10-07 11:31:47.073620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.372 [2024-10-07 11:31:47.073637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.073668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.073700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.073718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.073733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.073763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.373 [2024-10-07 11:31:47.080937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.081053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.081085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.373 [2024-10-07 11:31:47.081102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.081134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.081167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.081185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.081199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.081228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.373 [2024-10-07 11:31:47.084264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.084391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.084423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.373 [2024-10-07 11:31:47.084440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.084471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.084503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.084521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.084535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.084566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.373 [2024-10-07 11:31:47.091195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.091302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.091357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.373 [2024-10-07 11:31:47.091384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.091638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.091794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.091819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.091834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.091940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.373 [2024-10-07 11:31:47.094907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.095020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.095052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.373 [2024-10-07 11:31:47.095069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.095101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.095133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.095151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.095164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.095194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.373 [2024-10-07 11:31:47.101345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.101460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.101491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.373 [2024-10-07 11:31:47.101508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.101540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.101572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.101590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.101605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.101634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.373 [2024-10-07 11:31:47.105170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.105285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.105330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.373 [2024-10-07 11:31:47.105349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.105602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.105757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.105781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.105795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.105905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.373 [2024-10-07 11:31:47.112120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.112233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.112265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.373 [2024-10-07 11:31:47.112282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.112330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.112367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.112386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.112400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.112438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.373 [2024-10-07 11:31:47.115278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.115401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.115433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.373 [2024-10-07 11:31:47.115450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.115482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.115514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.115532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.115546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.115576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.373 [2024-10-07 11:31:47.122696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.373 [2024-10-07 11:31:47.122820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.373 [2024-10-07 11:31:47.122861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.373 [2024-10-07 11:31:47.122879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.373 [2024-10-07 11:31:47.122911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.373 [2024-10-07 11:31:47.122943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.373 [2024-10-07 11:31:47.122962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.373 [2024-10-07 11:31:47.122976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.373 [2024-10-07 11:31:47.123024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.373 [2024-10-07 11:31:47.126096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.126207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.126238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.126255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.126300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.126350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.126370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.126384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.126415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.374 [2024-10-07 11:31:47.133005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.133120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.133151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.374 [2024-10-07 11:31:47.133169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.133440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.133587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.133620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.133636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.133743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.374 [2024-10-07 11:31:47.136633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.136743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.136774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.136791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.136823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.136855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.136873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.136887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.136917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.374 [2024-10-07 11:31:47.143098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.143211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.143242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.374 [2024-10-07 11:31:47.143277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.143311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.143362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.143381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.143396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.143427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.374 [2024-10-07 11:31:47.146928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.147041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.147072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.147090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.147374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.147541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.147575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.147592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.147700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.374 [2024-10-07 11:31:47.153816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.153927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.153958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.374 [2024-10-07 11:31:47.153976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.154007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.154039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.154057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.154071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.154101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.374 [2024-10-07 11:31:47.157019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.157128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.157159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.157176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.157207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.157239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.157273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.157287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.157334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.374 [2024-10-07 11:31:47.164394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.164507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.164539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.374 [2024-10-07 11:31:47.164557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.164588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.164620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.164638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.164652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.164682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.374 [2024-10-07 11:31:47.167704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.167816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.167848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.167865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.167896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.167928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.167946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.167959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.167989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.374 [2024-10-07 11:31:47.174750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.174866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.174898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.374 [2024-10-07 11:31:47.174915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.175166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.175312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.175368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.175386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.175494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.374 [2024-10-07 11:31:47.178341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.374 [2024-10-07 11:31:47.178452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.374 [2024-10-07 11:31:47.178483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.374 [2024-10-07 11:31:47.178500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.374 [2024-10-07 11:31:47.178532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.374 [2024-10-07 11:31:47.178564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.374 [2024-10-07 11:31:47.178582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.374 [2024-10-07 11:31:47.178596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.374 [2024-10-07 11:31:47.178626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.184840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.184952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.184983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.185000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.185031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.185064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.185081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.185095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.185125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.375 [2024-10-07 11:31:47.188588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.188700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.188731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.375 [2024-10-07 11:31:47.188748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.188998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.189157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.189193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.189210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.189330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.195401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.195512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.195544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.195562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.195612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.195645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.195664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.195678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.195708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.375 [2024-10-07 11:31:47.198678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.198789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.198820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.375 [2024-10-07 11:31:47.198837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.198869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.198901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.198919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.198933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.198963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.205994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.206108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.206139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.206156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.206188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.206220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.206238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.206253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.206283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.375 [2024-10-07 11:31:47.209475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.209585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.209616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.375 [2024-10-07 11:31:47.209633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.209665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.209696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.209714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.209747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.209780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.216480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.216593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.216624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.216642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.216893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.217041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.217076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.217093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.217201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.375 [2024-10-07 11:31:47.220097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.220208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.220238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.375 [2024-10-07 11:31:47.220255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.220287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.220334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.220355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.220369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.220401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.226570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.226690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.226721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.226739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.226771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.226802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.226821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.226835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.226864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.375 [2024-10-07 11:31:47.230408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.230540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.230572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.375 [2024-10-07 11:31:47.230590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.230852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.230999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.231035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.375 [2024-10-07 11:31:47.231053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.375 [2024-10-07 11:31:47.231160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.375 [2024-10-07 11:31:47.237287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.375 [2024-10-07 11:31:47.237411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.375 [2024-10-07 11:31:47.237443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.375 [2024-10-07 11:31:47.237460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.375 [2024-10-07 11:31:47.237491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.375 [2024-10-07 11:31:47.237523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.375 [2024-10-07 11:31:47.237541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.237556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.237586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.240515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.240626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.240656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.240674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.240705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.240737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.240755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.240769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.240798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.376 [2024-10-07 11:31:47.247767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.247882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.247913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.376 [2024-10-07 11:31:47.247930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.247962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.248013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.248033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.248047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.248077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.251068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.251180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.251212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.251229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.251261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.251293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.251310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.251343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.251377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.376 [2024-10-07 11:31:47.258031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.258145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.258177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.376 [2024-10-07 11:31:47.258195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.258499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.258651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.258687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.258706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.258813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.261579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.261688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.261719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.261736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.261767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.261799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.261817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.261831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.261877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.376 [2024-10-07 11:31:47.268123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.268237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.268268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.376 [2024-10-07 11:31:47.268286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.268333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.268369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.268388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.268401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.268431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.271869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.271983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.272013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.272031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.272282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.272443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.272477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.272495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.272602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.376 [2024-10-07 11:31:47.278711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.278825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.278857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.376 [2024-10-07 11:31:47.278875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.278907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.278938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.278957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.278971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.279000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.281962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.282080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.282112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.282145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.282179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.282212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.282230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.282243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.282999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.376 [2024-10-07 11:31:47.289188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.289302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.289347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.376 [2024-10-07 11:31:47.289365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.289397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.289430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.376 [2024-10-07 11:31:47.289448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.376 [2024-10-07 11:31:47.289462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.376 [2024-10-07 11:31:47.289492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.376 [2024-10-07 11:31:47.292479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.376 [2024-10-07 11:31:47.292593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.376 [2024-10-07 11:31:47.292632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.376 [2024-10-07 11:31:47.292649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.376 [2024-10-07 11:31:47.292681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.376 [2024-10-07 11:31:47.292713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.292731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.292745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.292777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.377 [2024-10-07 11:31:47.299362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.299476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.299508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.377 [2024-10-07 11:31:47.299525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.299776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.299922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.299970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.299988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.300099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.377 [2024-10-07 11:31:47.302948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.303061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.303092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.377 [2024-10-07 11:31:47.303109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.303141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.303173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.303190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.303205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.303236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.377 [2024-10-07 11:31:47.309452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.309564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.309596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.377 [2024-10-07 11:31:47.309613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.309645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.309677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.309695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.309710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.310456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.377 [2024-10-07 11:31:47.313083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.313192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.313223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.377 [2024-10-07 11:31:47.313240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.313517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.313681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.313716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.313733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.313841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.377 [2024-10-07 11:31:47.319939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.320064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.320095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.377 [2024-10-07 11:31:47.320112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.320144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.320177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.320195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.320209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.320239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.377 [2024-10-07 11:31:47.323172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.323283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.323314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.377 [2024-10-07 11:31:47.323349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.323387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.323420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.323438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.323453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.324184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.377 [2024-10-07 11:31:47.330434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.330553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.330584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.377 [2024-10-07 11:31:47.330601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.330633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.330665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.330683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.330697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.330727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.377 [2024-10-07 11:31:47.333677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.333787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.333819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.377 [2024-10-07 11:31:47.333836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.333886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.333920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.333937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.333951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.333982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.377 [2024-10-07 11:31:47.340570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.377 [2024-10-07 11:31:47.340690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.377 [2024-10-07 11:31:47.340721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.377 [2024-10-07 11:31:47.340738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.377 [2024-10-07 11:31:47.340989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.377 [2024-10-07 11:31:47.341134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.377 [2024-10-07 11:31:47.341167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.377 [2024-10-07 11:31:47.341184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.377 [2024-10-07 11:31:47.341291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.344118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.344229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.344260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.344277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.344308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.344358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.344377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.344391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.344422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.350669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.350782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.350813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.350831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.350862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.350895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.350913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.350944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.351690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.354274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.354421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.354453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.354471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.354723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.354867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.354912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.354928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.355036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.361074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.361187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.361224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.361241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.361273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.361305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.361337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.361352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.361383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.364396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.364511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.364542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.364559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.364590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.364623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.364641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.364655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.365393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.371537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.371671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.371703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.371720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.371752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.371784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.371802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.371816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.371846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.374788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.374900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.374930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.374947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.374979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.375011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.375029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.375043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.375082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.381703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.381816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.381847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.381869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.382120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.382279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.382339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.382358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.382467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.385197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.385306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.385363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.385381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.385413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.385462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.385481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.385496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.385526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.391788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.391901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.391932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.391949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.391981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.392013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.392031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.392045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.392785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.378 [2024-10-07 11:31:47.395452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.395564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.395594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.378 [2024-10-07 11:31:47.395611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.378 [2024-10-07 11:31:47.395887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.378 [2024-10-07 11:31:47.396072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.378 [2024-10-07 11:31:47.396107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.378 [2024-10-07 11:31:47.396124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.378 [2024-10-07 11:31:47.396231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.378 [2024-10-07 11:31:47.402210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.378 [2024-10-07 11:31:47.402357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.378 [2024-10-07 11:31:47.402389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.378 [2024-10-07 11:31:47.402407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.402439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.402472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.402490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.402504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.402553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.405542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.405652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.405692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.405709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.405741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.406492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.406531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.406548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.406719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.379 [2024-10-07 11:31:47.412676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.412789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.412820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.379 [2024-10-07 11:31:47.412837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.412880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.412912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.412930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.412944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.412974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.415968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.416081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.416113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.416130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.416162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.416195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.416212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.416226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.416257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.379 [2024-10-07 11:31:47.422960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.423072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.423104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.379 [2024-10-07 11:31:47.423138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.423413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.423575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.423610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.423628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.423735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.426523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.426635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.426666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.426683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.426715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.426748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.426766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.426780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.426810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.379 [2024-10-07 11:31:47.433048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.433160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.433191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.379 [2024-10-07 11:31:47.433208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.433239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.433272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.433291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.433305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.433350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.436748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.436864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.436896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.436913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.437165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.437341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.437401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.437419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.437529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.379 [2024-10-07 11:31:47.443565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.443680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.443712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.379 [2024-10-07 11:31:47.443729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.443761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.443793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.443811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.443826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.443855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.446836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.446947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.446977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.446994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.447026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.447059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.447076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.447091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.447844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.379 [2024-10-07 11:31:47.454010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.454126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.454157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.379 [2024-10-07 11:31:47.454174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.454206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.454238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.454257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.454270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.379 [2024-10-07 11:31:47.454313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.379 [2024-10-07 11:31:47.457327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.379 [2024-10-07 11:31:47.457439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.379 [2024-10-07 11:31:47.457470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.379 [2024-10-07 11:31:47.457487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.379 [2024-10-07 11:31:47.457519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.379 [2024-10-07 11:31:47.457558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.379 [2024-10-07 11:31:47.457576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.379 [2024-10-07 11:31:47.457589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.457620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.464306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.464439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.464469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.464487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.464740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.464900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.464935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.464952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.465059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 [2024-10-07 11:31:47.467847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.467958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.467989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.468006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.468042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.468074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.468092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.468106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.468136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.474418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.474528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.474560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.474577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.474628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.474661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.474678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.474693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.475431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 [2024-10-07 11:31:47.478070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.478182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.478213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.478229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.478520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.478682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.478743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.478759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.478867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.484872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.484985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.485015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.485032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.485064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.485096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.485113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.485128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.485157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 [2024-10-07 11:31:47.488163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.488273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.488304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.488338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.488373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.488405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.488423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.488454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.489178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.495303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.495432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.495464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.495481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.495514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.495546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.495564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.495578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.495608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 8890.45 IOPS, 34.73 MiB/s [2024-10-07T11:31:52.903Z] [2024-10-07 11:31:47.501305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.502339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.502384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.502405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.503204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.503406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.503433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.503448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.504776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.505611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.505942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.505985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.506004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.506132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.506244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.506271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.506297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.506353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 [2024-10-07 11:31:47.512283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.512434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.512466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.512484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.512516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.512548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.512566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.512580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.512611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.380 [2024-10-07 11:31:47.515699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.515812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.515842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.380 [2024-10-07 11:31:47.515859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.516598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.380 [2024-10-07 11:31:47.516789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.380 [2024-10-07 11:31:47.516816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.380 [2024-10-07 11:31:47.516832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.380 [2024-10-07 11:31:47.516923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.380 [2024-10-07 11:31:47.522748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.380 [2024-10-07 11:31:47.522859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.380 [2024-10-07 11:31:47.522889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.380 [2024-10-07 11:31:47.522906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.380 [2024-10-07 11:31:47.522938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.522970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.522988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.523002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.523032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.381 [2024-10-07 11:31:47.525980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.526100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.526131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.381 [2024-10-07 11:31:47.526148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.526197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.526231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.526249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.526263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.526306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.381 [2024-10-07 11:31:47.532846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.532960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.532992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.381 [2024-10-07 11:31:47.533011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.533262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.533413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.533448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.533465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.533573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.381 [2024-10-07 11:31:47.536417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.536529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.536561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.381 [2024-10-07 11:31:47.536579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.536611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.536643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.536661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.536675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.536705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.381 [2024-10-07 11:31:47.542935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.543051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.543082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.381 [2024-10-07 11:31:47.543100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.543131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.543885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.543922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.543957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.544149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.381 [2024-10-07 11:31:47.546529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.546640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.546671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.381 [2024-10-07 11:31:47.546689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.546951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.547084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.547107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.547121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.547226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.381 [2024-10-07 11:31:47.553447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.553562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.553593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.381 [2024-10-07 11:31:47.553610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.553642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.553674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.553691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.553706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.553736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.381 [2024-10-07 11:31:47.556617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.381 [2024-10-07 11:31:47.556725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.381 [2024-10-07 11:31:47.556756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.381 [2024-10-07 11:31:47.556773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.381 [2024-10-07 11:31:47.556804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.381 [2024-10-07 11:31:47.556836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.381 [2024-10-07 11:31:47.556855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.381 [2024-10-07 11:31:47.556869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.381 [2024-10-07 11:31:47.556898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.381 [2024-10-07 11:31:47.563984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.381 [2024-10-07 11:31:47.564097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.381 [2024-10-07 11:31:47.564175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.381 [2024-10-07 11:31:47.564195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.381 [2024-10-07 11:31:47.564229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.381 [2024-10-07 11:31:47.564262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.381 [2024-10-07 11:31:47.564280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.381 [2024-10-07 11:31:47.564293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.381 [2024-10-07 11:31:47.564338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.381 [2024-10-07 11:31:47.567604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.381 [2024-10-07 11:31:47.567775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.381 [2024-10-07 11:31:47.567835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.381 [2024-10-07 11:31:47.567867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.381 [2024-10-07 11:31:47.567920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.381 [2024-10-07 11:31:47.567977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.381 [2024-10-07 11:31:47.568006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.381 [2024-10-07 11:31:47.568030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.381 [2024-10-07 11:31:47.568075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.381 [2024-10-07 11:31:47.574381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.381 [2024-10-07 11:31:47.574510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.381 [2024-10-07 11:31:47.574551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.381 [2024-10-07 11:31:47.574571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.381 [2024-10-07 11:31:47.574827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.381 [2024-10-07 11:31:47.574974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.381 [2024-10-07 11:31:47.575006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.381 [2024-10-07 11:31:47.575023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.575132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.578361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.578516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.578574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.578605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.578664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.578738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.578768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.578791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.578835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.585101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.586184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.586248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.586280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.586569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.586763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.586819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.586846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.588430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.589695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.589835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.589879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.589899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.589933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.589966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.589985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.590003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.590034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.595566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.595684] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.595727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.595747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.595780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.595813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.595832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.595847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.595878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.600151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.600267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.600306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.600339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.600373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.600407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.600426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.600439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.600469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.606186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.606332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.606366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.606384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.606418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.606451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.606469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.606483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.606514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.610245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.610382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.610415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.610442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.610474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.610507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.610525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.610539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.610570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.616661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.616777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.616809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.616843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.617115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.617284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.617331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.617351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.617460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.620366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.620480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.620513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.620530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.620562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.620595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.620613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.620628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.620657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.626878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.626994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.627025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.627042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.627074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.627106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.627124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.627138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.627168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.630833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.630955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.630986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.382 [2024-10-07 11:31:47.631004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.631255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.631436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.382 [2024-10-07 11:31:47.631491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.382 [2024-10-07 11:31:47.631511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.382 [2024-10-07 11:31:47.631620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.382 [2024-10-07 11:31:47.637710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.382 [2024-10-07 11:31:47.637825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.382 [2024-10-07 11:31:47.637856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.382 [2024-10-07 11:31:47.637873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.382 [2024-10-07 11:31:47.637905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.382 [2024-10-07 11:31:47.637937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.637955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.637969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.638000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.640922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.641034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.641065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.641091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.641122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.641154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.641172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.641186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.641216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.648373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.648493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.648526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.383 [2024-10-07 11:31:47.648543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.648575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.648607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.648625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.648640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.648671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.651681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.651814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.651846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.651864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.651898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.651931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.651950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.651964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.651995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.658649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.658764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.658796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.383 [2024-10-07 11:31:47.658813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.659071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.659231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.659293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.659310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.659434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.662164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.662274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.662332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.662353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.662387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.662420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.662438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.662452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.662482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.668744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.668859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.668890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.383 [2024-10-07 11:31:47.668907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.668959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.668992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.669010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.669024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.669055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.672511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.672625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.672656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.672674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.672943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.673104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.673144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.673161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.673268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.679286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.679425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.679458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.383 [2024-10-07 11:31:47.679475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.679507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.679540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.679557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.679572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.679602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.682602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.682714] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.682745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.682762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.682793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.682825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.682843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.682875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.683626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.689792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.689905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.689937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.383 [2024-10-07 11:31:47.689954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.689986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.690017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.690035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.690050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.690080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.693056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.693168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.383 [2024-10-07 11:31:47.693201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.383 [2024-10-07 11:31:47.693219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.383 [2024-10-07 11:31:47.693250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.383 [2024-10-07 11:31:47.693282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.383 [2024-10-07 11:31:47.693301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.383 [2024-10-07 11:31:47.693329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.383 [2024-10-07 11:31:47.693364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.383 [2024-10-07 11:31:47.700068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.383 [2024-10-07 11:31:47.700178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.700210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.700227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.700517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.700677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.700712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.700729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.700836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.703611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.703722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.703776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.703795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.703828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.703860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.703879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.703893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.703923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.710165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.710278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.710335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.710355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.710388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.711118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.711155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.711173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.711374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.713763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.713872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.713903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.713927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.714178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.714349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.714384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.714401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.714516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.720638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.720759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.720790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.720807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.720839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.720891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.720910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.720924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.720954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.723851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.723963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.723993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.724010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.724041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.724074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.724092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.724106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.724136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.731180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.731306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.731351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.731370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.731402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.731434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.731452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.731466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.731496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.734475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.734587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.734628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.734645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.734677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.734709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.734727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.734742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.734772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.741430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.741545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.741576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.741594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.741845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.741991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.742022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.742039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.742146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.744996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.745106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.745136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.745154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.745186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.745219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.745237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.745251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.745281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.751560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.751673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.751705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.751722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.751754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.751786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.751804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.751818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.751848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.755479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.755590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.755621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.384 [2024-10-07 11:31:47.755655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.755908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.756055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.756085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.384 [2024-10-07 11:31:47.756102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.384 [2024-10-07 11:31:47.756208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.384 [2024-10-07 11:31:47.762408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.384 [2024-10-07 11:31:47.762524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.384 [2024-10-07 11:31:47.762556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.384 [2024-10-07 11:31:47.762573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.384 [2024-10-07 11:31:47.762605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.384 [2024-10-07 11:31:47.762637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.384 [2024-10-07 11:31:47.762655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.762670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.762701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.765567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.385 [2024-10-07 11:31:47.765677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.385 [2024-10-07 11:31:47.765707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.385 [2024-10-07 11:31:47.765725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.385 [2024-10-07 11:31:47.765756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.385 [2024-10-07 11:31:47.765788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.385 [2024-10-07 11:31:47.765806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.765821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.765856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.772941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.385 [2024-10-07 11:31:47.773054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.385 [2024-10-07 11:31:47.773086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.385 [2024-10-07 11:31:47.773103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.385 [2024-10-07 11:31:47.773135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.385 [2024-10-07 11:31:47.773167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.385 [2024-10-07 11:31:47.773202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.773217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.773249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.776218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.385 [2024-10-07 11:31:47.776345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.385 [2024-10-07 11:31:47.776378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.385 [2024-10-07 11:31:47.776395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.385 [2024-10-07 11:31:47.776430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.385 [2024-10-07 11:31:47.776462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.385 [2024-10-07 11:31:47.776480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.776494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.776524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.783167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.385 [2024-10-07 11:31:47.783280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.385 [2024-10-07 11:31:47.783311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.385 [2024-10-07 11:31:47.783344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.385 [2024-10-07 11:31:47.783596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.385 [2024-10-07 11:31:47.783741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.385 [2024-10-07 11:31:47.783791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.783808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.783915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.786758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.385 [2024-10-07 11:31:47.786868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.385 [2024-10-07 11:31:47.786915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.385 [2024-10-07 11:31:47.786933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.385 [2024-10-07 11:31:47.786965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.385 [2024-10-07 11:31:47.786997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.385 [2024-10-07 11:31:47.787015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.385 [2024-10-07 11:31:47.787029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.385 [2024-10-07 11:31:47.787059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.385 [2024-10-07 11:31:47.793256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.793401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.793463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.385 [2024-10-07 11:31:47.793482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.793514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.793547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.793564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.793578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.793609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.385 [2024-10-07 11:31:47.796999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.797112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.797150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.385 [2024-10-07 11:31:47.797168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.797437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.797582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.797613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.797630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.797737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.385 [2024-10-07 11:31:47.803920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.804033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.804064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.385 [2024-10-07 11:31:47.804081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.804113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.804145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.804163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.804177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.804207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.385 [2024-10-07 11:31:47.807088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.807201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.807237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.385 [2024-10-07 11:31:47.807255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.807306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.807355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.807374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.807389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.807419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.385 [2024-10-07 11:31:47.814447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.814561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.814599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.385 [2024-10-07 11:31:47.814618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.814664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.814698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.814716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.814730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.814760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.385 [2024-10-07 11:31:47.817692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.817801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.817831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.385 [2024-10-07 11:31:47.817848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.385 [2024-10-07 11:31:47.817880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.385 [2024-10-07 11:31:47.817919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.385 [2024-10-07 11:31:47.817937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.385 [2024-10-07 11:31:47.817951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.385 [2024-10-07 11:31:47.817981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.385 [2024-10-07 11:31:47.824757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.385 [2024-10-07 11:31:47.824869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.385 [2024-10-07 11:31:47.824900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.385 [2024-10-07 11:31:47.824917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.825169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.825345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.825388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.825420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.825532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.828268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.828394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.828426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.828444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.828476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.828508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.828526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.828540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.828571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.834850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.834963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.834994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.835012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.835043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.835075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.835093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.835107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.835138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.838563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.838687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.838720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.838737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.838991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.839152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.839187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.839204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.839311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.845331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.845448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.845496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.845516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.845548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.845580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.845598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.845613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.845644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.848656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.848771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.848802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.848832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.848863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.849606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.849665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.849683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.849857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.855815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.855934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.855966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.855984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.856016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.856049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.856067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.856080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.856111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.859265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.859397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.859430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.859448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.859487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.859539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.859559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.859573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.859604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.866034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.866151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.866183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.866202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.866485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.866623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.866657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.866675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.866782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.869613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.869726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.869757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.869774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.869807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.869839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.869857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.869871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.869901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.876126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.876242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.876274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.876291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.876339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.876375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.876393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.876408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.877130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.879745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.879859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.879890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.879907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.880161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.880371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.880408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.880425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.880533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.386 [2024-10-07 11:31:47.886574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.886688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.886719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.386 [2024-10-07 11:31:47.886736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.886767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.886800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.386 [2024-10-07 11:31:47.886824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.386 [2024-10-07 11:31:47.886838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.386 [2024-10-07 11:31:47.886869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.386 [2024-10-07 11:31:47.889836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.386 [2024-10-07 11:31:47.889944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.386 [2024-10-07 11:31:47.889974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.386 [2024-10-07 11:31:47.889991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.386 [2024-10-07 11:31:47.890022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.386 [2024-10-07 11:31:47.890054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.890072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.890086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.890837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.897000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.897112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.897143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.897181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.897215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.897247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.897265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.897281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.897312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.900256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.900384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.900416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.900434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.900466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.900498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.900516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.900530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.900560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.907214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.907341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.907373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.907391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.907648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.907807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.907839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.907856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.907964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.910743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.910854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.910885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.910902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.910934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.910967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.911002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.911018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.911049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.917304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.917441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.917473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.917490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.917527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.917559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.917578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.917592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.918350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.921000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.921111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.921142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.921160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.921446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.921606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.921642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.921659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.921767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.927948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.928062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.928094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.928111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.928143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.928175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.928193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.928207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.928238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.931089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.931231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.931262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.931279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.931311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.931361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.931380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.931394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.931424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.938574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.938688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.938719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.938736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.938768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.938800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.938818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.938832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.938862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.941892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.942011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.942042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.942059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.942091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.942123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.942141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.942156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.942185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.948889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.949001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.949033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.949050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.949352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.949516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.949550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.949569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.949678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.952409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.952520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.952551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.387 [2024-10-07 11:31:47.952567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.952599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.952631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.952649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.952663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.952692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.387 [2024-10-07 11:31:47.958978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.959091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.387 [2024-10-07 11:31:47.959122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.387 [2024-10-07 11:31:47.959139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.387 [2024-10-07 11:31:47.959171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.387 [2024-10-07 11:31:47.959203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.387 [2024-10-07 11:31:47.959221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.387 [2024-10-07 11:31:47.959235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.387 [2024-10-07 11:31:47.959266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.387 [2024-10-07 11:31:47.962824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.387 [2024-10-07 11:31:47.962934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.962965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:47.962981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.963233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.963410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.963446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.963481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.963590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:47.969677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.969795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.969826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:47.969843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.969874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.969907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.969925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.969939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.969969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:47.972915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.973025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.973056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:47.973073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.973105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.973137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.973155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.973170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.973199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:47.980276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.980404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.980445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:47.980462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.980494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.980526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.980544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.980558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.980589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:47.983611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.983723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.983772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:47.983791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.983823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.983857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.983875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.983889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.983919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:47.990631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.990744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.990782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:47.990800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.991051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.991209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.991243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.991261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.991382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:47.994113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:47.994222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:47.994257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:47.994274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:47.994331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:47.994368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:47.994386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:47.994401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:47.994431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:48.000719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.000831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.000863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:48.000880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.000912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.000961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.000980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.000995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.001025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:48.004512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.004624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.004655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:48.004672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.004938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.005099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.005133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.005151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.005258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:48.011369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.011483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.011516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:48.011533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.011566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.011598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.011616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.011630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.011660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:48.014601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.014712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.014744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:48.014762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.014793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.014825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.014843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.014857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.014905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:48.021883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.021998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.022031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:48.022049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.022080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.022113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.022131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.022145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.022175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.388 [2024-10-07 11:31:48.025151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.025262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.025293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.388 [2024-10-07 11:31:48.025311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.025362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.025394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.388 [2024-10-07 11:31:48.025413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.388 [2024-10-07 11:31:48.025433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.388 [2024-10-07 11:31:48.025463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.388 [2024-10-07 11:31:48.032147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.388 [2024-10-07 11:31:48.032261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.388 [2024-10-07 11:31:48.032292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.388 [2024-10-07 11:31:48.032309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.388 [2024-10-07 11:31:48.032580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.388 [2024-10-07 11:31:48.032740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.032774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.032792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.032899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.035690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.035801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.035831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.035866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.035900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.035932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.035950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.035964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.035994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.042238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.042372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.042404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.042422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.042454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.042486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.042504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.042519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.042550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.045921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.046032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.046063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.046080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.046362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.046523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.046559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.046576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.046684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.052802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.052914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.052955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.052972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.053004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.053036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.053071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.053087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.053119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.056007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.056118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.056149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.056166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.056198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.056230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.056248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.056263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.056292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.063340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.063453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.063485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.063502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.063534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.063566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.063583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.063598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.063628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.066625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.066737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.066768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.066786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.066817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.066848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.066866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.066880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.066910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.073619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.073758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.073799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.073816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.074067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.074227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.074261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.074278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.074412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.077115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.077225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.077255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.077272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.077304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.077358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.077378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.077392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.077422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.083724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.083836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.083867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.083884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.083916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.083948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.083965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.083980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.084729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.087355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.087465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.087497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.087514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.087785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.087958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.087994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.088011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.088119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.389 [2024-10-07 11:31:48.094176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.094302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.094348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.389 [2024-10-07 11:31:48.094367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.094401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.094434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.094451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.389 [2024-10-07 11:31:48.094465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.389 [2024-10-07 11:31:48.094496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.389 [2024-10-07 11:31:48.097446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.389 [2024-10-07 11:31:48.097555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.389 [2024-10-07 11:31:48.097585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.389 [2024-10-07 11:31:48.097602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.389 [2024-10-07 11:31:48.097634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.389 [2024-10-07 11:31:48.097665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.389 [2024-10-07 11:31:48.097683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.097698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.098444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.104603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.104718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.104749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.104767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.104798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.104830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.104848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.104879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.104913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.107898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.108010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.108041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.108058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.108090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.108123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.108141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.108155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.108187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.114868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.114983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.115014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.115032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.115297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.115485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.115521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.115538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.115646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.118382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.118492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.118523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.118540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.118571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.118603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.118621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.118636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.118665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.124961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.125073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.125121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.125140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.125172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.125914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.125951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.125970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.126141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.128588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.128700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.128731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.128749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.129000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.129164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.129199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.129216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.129337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.135360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.135473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.135505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.135522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.135553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.135585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.135603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.135617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.135647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.138678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.138790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.138821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.138838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.138870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.139631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.139670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.139688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.139858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.145765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.145878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.145910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.145928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.145960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.145992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.146010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.146024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.146055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.149024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.149134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.149165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.149183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.149214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.149246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.149264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.149278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.149308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.155953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.156067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.156098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.156116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.156394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.156541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.156573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.156590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.156719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.159543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.159655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.159687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.159704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.159735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.159767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.159785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.159799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.159829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.166042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.166158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.166190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.390 [2024-10-07 11:31:48.166207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.166238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.166270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.166300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.166329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.167057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.390 [2024-10-07 11:31:48.169683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.169792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.390 [2024-10-07 11:31:48.169822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.390 [2024-10-07 11:31:48.169839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.390 [2024-10-07 11:31:48.170090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.390 [2024-10-07 11:31:48.170253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.390 [2024-10-07 11:31:48.170296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.390 [2024-10-07 11:31:48.170328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.390 [2024-10-07 11:31:48.170441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.390 [2024-10-07 11:31:48.176516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.390 [2024-10-07 11:31:48.176629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.176661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.176695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.176729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.176762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.176780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.176795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.176825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.179768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.179880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.179910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.179927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.179959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.179991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.180009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.180023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.180764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.187004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.187117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.187148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.187165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.187198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.187230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.187248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.187262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.187291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.190239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.190372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.190404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.190422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.190454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.190486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.190522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.190537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.190569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.197153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.197267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.197298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.197329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.197585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.197751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.197777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.197792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.197899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.200744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.200856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.200887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.200904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.200935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.200968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.200986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.201001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.201030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.207244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.207369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.207401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.207419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.207451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.207483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.207501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.207515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.207545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.210904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.211034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.211066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.211084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.211351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.211487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.211521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.211538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.211645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.217764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.217889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.217921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.217938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.217970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.218002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.218020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.218035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.218065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.221011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.221121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.221155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.221172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.221203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.221235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.221253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.221268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.221297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.228277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.228404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.228436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.228454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.228504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.228538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.228555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.228570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.228601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.231554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.231668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.231698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.231716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.231748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.231783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.231801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.231815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.231845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.238545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.238659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.238690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.238708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.238959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.239105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.239138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.239154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.239262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.391 [2024-10-07 11:31:48.242128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.242238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.242269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.391 [2024-10-07 11:31:48.242297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.242346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.391 [2024-10-07 11:31:48.242382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.391 [2024-10-07 11:31:48.242401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.391 [2024-10-07 11:31:48.242432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.391 [2024-10-07 11:31:48.242464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.391 [2024-10-07 11:31:48.248632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.391 [2024-10-07 11:31:48.248744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.391 [2024-10-07 11:31:48.248776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.391 [2024-10-07 11:31:48.248793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.391 [2024-10-07 11:31:48.248825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.248857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.248875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.248890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.248920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.252561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.252672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.252703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.252721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.252972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.253158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.253193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.253216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.253337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.259485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.259604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.259636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.259653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.259685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.259717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.259735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.259750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.259780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.262651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.262759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.262806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.262825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.262858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.262890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.262908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.262922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.262953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.270027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.270145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.270177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.270195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.270228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.270260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.270278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.270305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.270355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.273341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.273452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.273483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.273501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.273533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.273565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.273583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.273598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.273628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.280451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.280562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.280594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.280611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.280862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.281039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.281074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.281092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.281200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.283958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.284069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.284100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.284117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.284148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.284180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.284198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.284212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.284243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.290538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.290651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.290682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.290699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.290731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.290763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.290781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.290795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.290825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.294337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.294448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.294480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.294497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.294756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.294915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.294949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.294966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.295091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.301184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.301298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.301344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.301363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.301396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.301428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.301446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.301460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.301490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.304424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.304534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.304565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.304582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.304614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.304646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.304663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.304678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.304707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.392 [2024-10-07 11:31:48.311757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.311873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.311905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.392 [2024-10-07 11:31:48.311922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.311954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.311986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.312007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.312022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.392 [2024-10-07 11:31:48.312052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.392 [2024-10-07 11:31:48.315086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.392 [2024-10-07 11:31:48.315199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.392 [2024-10-07 11:31:48.315230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.392 [2024-10-07 11:31:48.315264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.392 [2024-10-07 11:31:48.315298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.392 [2024-10-07 11:31:48.315347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.392 [2024-10-07 11:31:48.315368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.392 [2024-10-07 11:31:48.315382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.315412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.322073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.322190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.322221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.322239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.322535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.322706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.322741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.322759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.322867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.325583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.325694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.325725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.325743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.325775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.325806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.325825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.325839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.325869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.332165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.332275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.332306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.332339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.332373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.332405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.332440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.332456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.332488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.335887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.336001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.336033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.336050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.336330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.336492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.336527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.336544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.336651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.342701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.342817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.342848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.342865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.342896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.342928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.342946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.342960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.342991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.345977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.346085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.346115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.346132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.346164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.346196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.346214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.346228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.346259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.353201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.353344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.353378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.353395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.353428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.353460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.353478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.353492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.353524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.356469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.356581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.356612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.356629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.356661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.356693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.356711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.356725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.356755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.363504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.363617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.363648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.363666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.363917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.364078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.364114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.364131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.364239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.367016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.367126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.367157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.367174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.367221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.367254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.367272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.367286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.367332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.373593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.373699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.373730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.373747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.373778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.373811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.373829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.373842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.373873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.377289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.377412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.377444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.377461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.377712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.377846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.377879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.377897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.378004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.384166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.384280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.384312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.384345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.384377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.384410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.384428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.384459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.384492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.393 [2024-10-07 11:31:48.387387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.387500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.387531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.393 [2024-10-07 11:31:48.387548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.387580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.393 [2024-10-07 11:31:48.387612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.393 [2024-10-07 11:31:48.387630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.393 [2024-10-07 11:31:48.387644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.393 [2024-10-07 11:31:48.387674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.393 [2024-10-07 11:31:48.394820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.393 [2024-10-07 11:31:48.394934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.393 [2024-10-07 11:31:48.394965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.393 [2024-10-07 11:31:48.394983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.393 [2024-10-07 11:31:48.395015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.395047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.395064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.395079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.395110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.398061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.398171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.398202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.398219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.398250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.398282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.398314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.398347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.398380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.405102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.405218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.405266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.405285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.405553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.405689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.405723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.405740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.405848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.408688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.408804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.408835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.408852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.408885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.408917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.408935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.408949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.408979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.415195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.415309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.415356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.415374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.415406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.415438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.415456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.415471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.415502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.418858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.418971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.419002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.419020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.419270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.419443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.419478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.419495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.419603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.425727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.425841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.425872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.425890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.425922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.425953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.425971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.425985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.426016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.428956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.429066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.429097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.429115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.429146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.429178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.429196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.429210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.429947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.436171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.436285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.436343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.436363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.436396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.436428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.436446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.436460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.436509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.439500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.439615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.439647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.439664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.439697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.439729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.439746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.439761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.439791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.446472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.446586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.446618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.446635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.446901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.447055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.447087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.447105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.447212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.450047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.450156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.450187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.450204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.450236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.450268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.450300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.450331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.450367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.456563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.456675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.456706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.456739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.456774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.456807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.456825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.456838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.457576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.460209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.460333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.460365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.460382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.460633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.460778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.460810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.460826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.460933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.394 [2024-10-07 11:31:48.467004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.467118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.467150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.394 [2024-10-07 11:31:48.467167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.467199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.467232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.394 [2024-10-07 11:31:48.467250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.394 [2024-10-07 11:31:48.467264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.394 [2024-10-07 11:31:48.467294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.394 [2024-10-07 11:31:48.470312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.394 [2024-10-07 11:31:48.470436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.394 [2024-10-07 11:31:48.470467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.394 [2024-10-07 11:31:48.470484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.394 [2024-10-07 11:31:48.470516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.394 [2024-10-07 11:31:48.470547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.470583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.470598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.471337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.477476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.477595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.477627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.477644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.477676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.477708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.477726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.477740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.477770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.481050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.481213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.481253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.481278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.481338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.481398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.481424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.481444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.481487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.487702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.487825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.487858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.487876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.488129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.488309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.488358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.488375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.488486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.491699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.491859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.491904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.491931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.491976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.492021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.492048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.492071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.493052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.501026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 8920.92 IOPS, 34.85 MiB/s [2024-10-07T11:31:52.918Z] [2024-10-07 11:31:48.501248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.501284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.501302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.502629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.503588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.503629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.503648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.503843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.503874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.504032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.504064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.504081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.505374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.506156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.506195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.506213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.507113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.511370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.511485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.511517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.511535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.511587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.511620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.511639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.511653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.511684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.515136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.515254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.515288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.515306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.515573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.515722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.515759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.515776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.515884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.521896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.522013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.522045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.522062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.522095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.522127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.522148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.522162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.522193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.525229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.525351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.525384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.525401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.525434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.525466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.525484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.525514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.526239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.532385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.532502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.532534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.532552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.532584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.532616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.532634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.532648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.532679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.535656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.535767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.535799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.535816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.535848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.535883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.535902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.535916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.535946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.395 [2024-10-07 11:31:48.542564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.542680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.542712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.395 [2024-10-07 11:31:48.542729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.395 [2024-10-07 11:31:48.542982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.395 [2024-10-07 11:31:48.543129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.395 [2024-10-07 11:31:48.543164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.395 [2024-10-07 11:31:48.543181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.395 [2024-10-07 11:31:48.543290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.395 [2024-10-07 11:31:48.546041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.395 [2024-10-07 11:31:48.546168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.395 [2024-10-07 11:31:48.546200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.395 [2024-10-07 11:31:48.546217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.546249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.546281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.546330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.546348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.546382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.552655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.552769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.552800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.552818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.552849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.552881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.552899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.552915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.553652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.556265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.556399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.556431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.556448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.556715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.556870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.556905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.556922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.557033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.563067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.563182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.563214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.563231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.563263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.563314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.563350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.563365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.563396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.566368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.566478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.566510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.566527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.566559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.566591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.566609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.566624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.567361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.573521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.573635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.573667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.573685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.573717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.573749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.573767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.573782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.573812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.576803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.576915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.576947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.576964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.576996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.577029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.577047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.577061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.577107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.583771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.583894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.583925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.583943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.584196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.584357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.584393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.584410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.584518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.587272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.587400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.587432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.587449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.587480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.587513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.587531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.587545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.587575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.593875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.593987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.594019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.594036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.594068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.594099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.594117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.594131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.594899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.597574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.597686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.597717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.597756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.598011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.598170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.598206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.598223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.598365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.604421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.604536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.604567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.604584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.604618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.604650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.604668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.604682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.604712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.607660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.607776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.607807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.607824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.607856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.607888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.607906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.607920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.607950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.615170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.615284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.615330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.615350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.615389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.615421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.615455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.615470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.615503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.396 [2024-10-07 11:31:48.618516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.618628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.618659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.396 [2024-10-07 11:31:48.618677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.618708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.618741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.396 [2024-10-07 11:31:48.618759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.396 [2024-10-07 11:31:48.618773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.396 [2024-10-07 11:31:48.618803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.396 [2024-10-07 11:31:48.625556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.396 [2024-10-07 11:31:48.625667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.396 [2024-10-07 11:31:48.625699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.396 [2024-10-07 11:31:48.625716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.396 [2024-10-07 11:31:48.625967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.396 [2024-10-07 11:31:48.626130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.626165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.626183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.626303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.629118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.629226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.629258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.629275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.629307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.629355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.629374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.629388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.629418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.635644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.635757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.635789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.635807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.635838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.635870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.635888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.635902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.635933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.639413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.639526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.639557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.639574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.639826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.639972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.640008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.640025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.640138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.646226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.646362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.646394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.646412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.646445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.646477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.646500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.646514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.646544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.649503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.649612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.649643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.649660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.649709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.649741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.649760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.649774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.649805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.656782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.656896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.656927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.656945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.656977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.657009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.657027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.657042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.657072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.660096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.660207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.660238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.660256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.660288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.660336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.660357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.660372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.660402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.667075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.667188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.667220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.667237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.667508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.667702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.667734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.667767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.667877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.670588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.670699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.670730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.670747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.670780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.670812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.670830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.670844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.670874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.677163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.677275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.677307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.677339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.677373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.677405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.677423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.677437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.678158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.680815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.680926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.680957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.680974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.681225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.681389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.681446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.681465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.681575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.687610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.687746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.687778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.687796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.687829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.687862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.687880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.687894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.687925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.690909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.691021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.691052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.691069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.691101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.691134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.691152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.691166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.691905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.698009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.698122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.698153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.698170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.698203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.698235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.698253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.698267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.698311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.397 [2024-10-07 11:31:48.701281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.701402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.701434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.397 [2024-10-07 11:31:48.701451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.701483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.701531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.397 [2024-10-07 11:31:48.701551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.397 [2024-10-07 11:31:48.701565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.397 [2024-10-07 11:31:48.701595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.397 [2024-10-07 11:31:48.708187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.397 [2024-10-07 11:31:48.708301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.397 [2024-10-07 11:31:48.708347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.397 [2024-10-07 11:31:48.708366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.397 [2024-10-07 11:31:48.708619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.397 [2024-10-07 11:31:48.708813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.708849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.708866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.708974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.711695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.711806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.711837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.711854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.711885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.711917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.711935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.711950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.711979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.718276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.718411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.718444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.718461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.718493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.718525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.718543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.718558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.719298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.721917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.722028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.722059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.722076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.722374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.722526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.722561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.722579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.722687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.728764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.728891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.728922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.728939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.728971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.729004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.729022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.729036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.729066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.732000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.732118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.732149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.732166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.732197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.732229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.732247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.732261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.732292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.739235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.739367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.739399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.739433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.739467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.739500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.739519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.739533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.739564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.742561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.742675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.742705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.742723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.742754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.742786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.742804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.742819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.742849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.749509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.749620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.749652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.749669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.749920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.750073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.750111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.750129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.750238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.753046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.753158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.753190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.753207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.753238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.753270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.753305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.753337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.753371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.759603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.759717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.759749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.759767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.759798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.759830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.759848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.759863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.759893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.763312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.763443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.763475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.763492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.763744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.763891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.763926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.763943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.764050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.770123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.770235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.770267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.770297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.770346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.770381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.770400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.770414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.770444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.773418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.773528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.773559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.773576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.773607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.773640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.773658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.773672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.773702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.398 [2024-10-07 11:31:48.780975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.781085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.781117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.398 [2024-10-07 11:31:48.781134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.781165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.398 [2024-10-07 11:31:48.781197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.398 [2024-10-07 11:31:48.781215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.398 [2024-10-07 11:31:48.781230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.398 [2024-10-07 11:31:48.781260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.398 [2024-10-07 11:31:48.784359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.398 [2024-10-07 11:31:48.784470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.398 [2024-10-07 11:31:48.784501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.398 [2024-10-07 11:31:48.784519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.398 [2024-10-07 11:31:48.784550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.784582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.784601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.784615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.784645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.791461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.791583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.791614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.791631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.791902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.792062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.792097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.792114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.792227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.399 [2024-10-07 11:31:48.795054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.795165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.795196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.399 [2024-10-07 11:31:48.795214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.795246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.795278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.795296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.795310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.795357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.801620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.801731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.801762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.801780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.801811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.801843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.801862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.801876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.801907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.399 [2024-10-07 11:31:48.805491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.805605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.805636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.399 [2024-10-07 11:31:48.805654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.805905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.806052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.806087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.806120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.806229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.812376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.812490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.812521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.812539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.812571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.812603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.812621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.812635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.812665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.399 [2024-10-07 11:31:48.815582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.815692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.815724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.399 [2024-10-07 11:31:48.815741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.815773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.815805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.815823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.815837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.815867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.822972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.823087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.823118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.823136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.823168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.823200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.823218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.823232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.823262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.399 [2024-10-07 11:31:48.826304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.826455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.826486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.399 [2024-10-07 11:31:48.826504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.826535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.826567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.826585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.826600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.826629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.833276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.833399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.833432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.833449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.833701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.833847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.833872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.833887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.833993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.399 [2024-10-07 11:31:48.836898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.837008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.837039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.399 [2024-10-07 11:31:48.837056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.837088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.837120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.837138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.837152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.837182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.399 [2024-10-07 11:31:48.843379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.399 [2024-10-07 11:31:48.843493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.399 [2024-10-07 11:31:48.843525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.399 [2024-10-07 11:31:48.843542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.399 [2024-10-07 11:31:48.843573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.399 [2024-10-07 11:31:48.843625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.399 [2024-10-07 11:31:48.843644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.399 [2024-10-07 11:31:48.843658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.399 [2024-10-07 11:31:48.843689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.400 [2024-10-07 11:31:48.847103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.847215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.847246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.400 [2024-10-07 11:31:48.847264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.847529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.847670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.847695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.847709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.847815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.400 [2024-10-07 11:31:48.853962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.854076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.854108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.400 [2024-10-07 11:31:48.854125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.854157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.854188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.854206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.854220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.854251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.400 [2024-10-07 11:31:48.857191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.857299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.857343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.400 [2024-10-07 11:31:48.857362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.857394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.857426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.857444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.857458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.857507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.400 [2024-10-07 11:31:48.864419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.864535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.864566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.400 [2024-10-07 11:31:48.864583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.864615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.864648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.864665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.864679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.864710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.400 [2024-10-07 11:31:48.867687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.867799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.867830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.400 [2024-10-07 11:31:48.867847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.867879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.867911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.867930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.867944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.867974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.400 [2024-10-07 11:31:48.874552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.874664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.874696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.400 [2024-10-07 11:31:48.874714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.874965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.875116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.875152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.875169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.875277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.400 [2024-10-07 11:31:48.878099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.878207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.878238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.400 [2024-10-07 11:31:48.878273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.878332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.878370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.878388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.878402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.878433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.400 [2024-10-07 11:31:48.884643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.884754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.884785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.400 [2024-10-07 11:31:48.884803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.884835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.884867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.884884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.884899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.884931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.400 [2024-10-07 11:31:48.888303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.888427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.888457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.400 [2024-10-07 11:31:48.888475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.888726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.888859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.888893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.888911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.889017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.400 [2024-10-07 11:31:48.895180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.400 [2024-10-07 11:31:48.895293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.400 [2024-10-07 11:31:48.895339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.400 [2024-10-07 11:31:48.895359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.400 [2024-10-07 11:31:48.895392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.400 [2024-10-07 11:31:48.895424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.400 [2024-10-07 11:31:48.895462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.400 [2024-10-07 11:31:48.895478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.400 [2024-10-07 11:31:48.895510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.401 [2024-10-07 11:31:48.898399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.898509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.898540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.401 [2024-10-07 11:31:48.898557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.898589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.898621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.898643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.898658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.898688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.401 [2024-10-07 11:31:48.905693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.905819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.905851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.401 [2024-10-07 11:31:48.905869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.905901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.905932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.905950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.905964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.905995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.401 [2024-10-07 11:31:48.908953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.909064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.909096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.401 [2024-10-07 11:31:48.909113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.909145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.909177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.909194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.909209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.909239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.401 [2024-10-07 11:31:48.915879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.915994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.916026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.401 [2024-10-07 11:31:48.916043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.916295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.916458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.916494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.916512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.916620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.401 [2024-10-07 11:31:48.919422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.919534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.919565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.401 [2024-10-07 11:31:48.919583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.919614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.919646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.919665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.919679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.919708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.401 [2024-10-07 11:31:48.925968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.926080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.926111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.401 [2024-10-07 11:31:48.926128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.926159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.926192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.926209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.926223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.926976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.401 [2024-10-07 11:31:48.929603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.929711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.929741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.401 [2024-10-07 11:31:48.929759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.930045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.930193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.930219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.930234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.930387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.401 [2024-10-07 11:31:48.936428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.936542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.936574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.401 [2024-10-07 11:31:48.936591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.936624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.936656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.936674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.936689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.936718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.401 [2024-10-07 11:31:48.939687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.939798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.401 [2024-10-07 11:31:48.939829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.401 [2024-10-07 11:31:48.939847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.401 [2024-10-07 11:31:48.939879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.401 [2024-10-07 11:31:48.939911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.401 [2024-10-07 11:31:48.939929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.401 [2024-10-07 11:31:48.939943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.401 [2024-10-07 11:31:48.939973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.401 [2024-10-07 11:31:48.946874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.401 [2024-10-07 11:31:48.946987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.947018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.402 [2024-10-07 11:31:48.947036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.947069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.947101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.947119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.947154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.947188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.402 [2024-10-07 11:31:48.950148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.950258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.950302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.402 [2024-10-07 11:31:48.950337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.950374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.950407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.950426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.950440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.950471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.402 [2024-10-07 11:31:48.957089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.957202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.957234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.402 [2024-10-07 11:31:48.957252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.957519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.957687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.957721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.957738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.957846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.402 [2024-10-07 11:31:48.960617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.960734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.960766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.402 [2024-10-07 11:31:48.960783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.960815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.960847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.960865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.960880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.960910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.402 [2024-10-07 11:31:48.967181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.967311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.967356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.402 [2024-10-07 11:31:48.967374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.967407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.967440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.967458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.967473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.968195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.402 [2024-10-07 11:31:48.970854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.970965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.970997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.402 [2024-10-07 11:31:48.971014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.971266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.971438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.971473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.971491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.971598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.402 [2024-10-07 11:31:48.977631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.977752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.977784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.402 [2024-10-07 11:31:48.977801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.977833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.977866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.977883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.977897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.977928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.402 [2024-10-07 11:31:48.980939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.981054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.981084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.402 [2024-10-07 11:31:48.981102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.981151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.981184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.981203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.981217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.981963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.402 [2024-10-07 11:31:48.988074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.988185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.988217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.402 [2024-10-07 11:31:48.988234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.988266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.988297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.402 [2024-10-07 11:31:48.988328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.402 [2024-10-07 11:31:48.988346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.402 [2024-10-07 11:31:48.988378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.402 [2024-10-07 11:31:48.991390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.402 [2024-10-07 11:31:48.991500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.402 [2024-10-07 11:31:48.991531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.402 [2024-10-07 11:31:48.991548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.402 [2024-10-07 11:31:48.991580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.402 [2024-10-07 11:31:48.991611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:48.991630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:48.991644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:48.991675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.403 [2024-10-07 11:31:48.998311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:48.998441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:48.998473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.403 [2024-10-07 11:31:48.998490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:48.998742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:48.998889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:48.998925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:48.998943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:48.999070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.403 [2024-10-07 11:31:49.001839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.001963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.001994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.403 [2024-10-07 11:31:49.002011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.002043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.002075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.002094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.002108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.002138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.403 [2024-10-07 11:31:49.008420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.008539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.008572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.403 [2024-10-07 11:31:49.008589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.008621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.009365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.009401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.009418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.009590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.403 [2024-10-07 11:31:49.012009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.012122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.012154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.403 [2024-10-07 11:31:49.012171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.012442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.012610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.012636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.012651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.012757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.403 [2024-10-07 11:31:49.018791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.018909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.018960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.403 [2024-10-07 11:31:49.018979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.019012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.019045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.019063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.019077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.019108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.403 [2024-10-07 11:31:49.022100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.022209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.022240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.403 [2024-10-07 11:31:49.022257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.022300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.022352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.022372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.022387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.023109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.403 [2024-10-07 11:31:49.029249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.029375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.029408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.403 [2024-10-07 11:31:49.029425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.029458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.029490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.029508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.029522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.029553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.403 [2024-10-07 11:31:49.032568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.032680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.032711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.403 [2024-10-07 11:31:49.032729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.403 [2024-10-07 11:31:49.032760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.403 [2024-10-07 11:31:49.032810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.403 [2024-10-07 11:31:49.032831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.403 [2024-10-07 11:31:49.032845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.403 [2024-10-07 11:31:49.032876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.403 [2024-10-07 11:31:49.039491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.403 [2024-10-07 11:31:49.039607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.403 [2024-10-07 11:31:49.039638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.403 [2024-10-07 11:31:49.039655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.039907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.040070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.040104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.040121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.040229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.043053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.043164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.043195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.404 [2024-10-07 11:31:49.043212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.043244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.043276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.043294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.043308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.043353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.404 [2024-10-07 11:31:49.049586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.049697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.049728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.404 [2024-10-07 11:31:49.049746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.049778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.049810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.049827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.049841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.049872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.053267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.053391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.053422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.404 [2024-10-07 11:31:49.053440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.053691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.053841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.053874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.053891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.053998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.404 [2024-10-07 11:31:49.060112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.060226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.060259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.404 [2024-10-07 11:31:49.060276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.060308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.060356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.060377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.060392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.060422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.063367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.063477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.063508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.404 [2024-10-07 11:31:49.063525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.063557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.063588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.063606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.063621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.063650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.404 [2024-10-07 11:31:49.070651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.070767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.070798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.404 [2024-10-07 11:31:49.070835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.070870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.070903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.070921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.070935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.070965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.073922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.074031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.074062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.404 [2024-10-07 11:31:49.074079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.074110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.074142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.074160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.074175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.074204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.404 [2024-10-07 11:31:49.080822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.080937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.080968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.404 [2024-10-07 11:31:49.080986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.081238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.081424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.081457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.081474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.081607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.084411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.084520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.084551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.404 [2024-10-07 11:31:49.084568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.084600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.084632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.084650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.084681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.084714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.404 [2024-10-07 11:31:49.090922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.091035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.404 [2024-10-07 11:31:49.091067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.404 [2024-10-07 11:31:49.091084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.404 [2024-10-07 11:31:49.091116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.404 [2024-10-07 11:31:49.091147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.404 [2024-10-07 11:31:49.091165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.404 [2024-10-07 11:31:49.091179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.404 [2024-10-07 11:31:49.091210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.404 [2024-10-07 11:31:49.094615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.404 [2024-10-07 11:31:49.094728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.094763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.094781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.095048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.095191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.095224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.095241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.095395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.405 [2024-10-07 11:31:49.101484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.101603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.101634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.405 [2024-10-07 11:31:49.101652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.101684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.101715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.101733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.101747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.101778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.405 [2024-10-07 11:31:49.104705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.104834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.104866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.104884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.104916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.104948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.104966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.104980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.105009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.405 [2024-10-07 11:31:49.112005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.112117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.112148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.405 [2024-10-07 11:31:49.112166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.112198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.112230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.112248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.112262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.112292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.405 [2024-10-07 11:31:49.115289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.115421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.115452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.115470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.115502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.115534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.115552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.115567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.115597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.405 [2024-10-07 11:31:49.122156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.122268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.122313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.405 [2024-10-07 11:31:49.122349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.122620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.122755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.122789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.122806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.122914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.405 [2024-10-07 11:31:49.125767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.125876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.125908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.125925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.125956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.125988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.126006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.126020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.126050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.405 [2024-10-07 11:31:49.132248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.132375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.132407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.405 [2024-10-07 11:31:49.132425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.132457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.132489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.132507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.132522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.132552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.405 [2024-10-07 11:31:49.136021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.136132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.136164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.136181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.136448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.136608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.136643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.136660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.136776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.405 [2024-10-07 11:31:49.142920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.143035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.143067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.405 [2024-10-07 11:31:49.143084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.143116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.143148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.143166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.143179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.143210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.405 [2024-10-07 11:31:49.146106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.405 [2024-10-07 11:31:49.146214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.405 [2024-10-07 11:31:49.146244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.405 [2024-10-07 11:31:49.146263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.405 [2024-10-07 11:31:49.146309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.405 [2024-10-07 11:31:49.146361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.405 [2024-10-07 11:31:49.146385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.405 [2024-10-07 11:31:49.146399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.405 [2024-10-07 11:31:49.146429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.406 [2024-10-07 11:31:49.153432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.153544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.153576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.406 [2024-10-07 11:31:49.153594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.153625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.153657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.153675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.153689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.153719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.406 [2024-10-07 11:31:49.156760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.156872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.156920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.406 [2024-10-07 11:31:49.156939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.156971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.157004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.157022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.157036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.157066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.406 [2024-10-07 11:31:49.163766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.163880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.163911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.406 [2024-10-07 11:31:49.163929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.164180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.164341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.164377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.164395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.164503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.406 [2024-10-07 11:31:49.167280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.167401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.167433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.406 [2024-10-07 11:31:49.167450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.167482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.167514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.167532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.167546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.167576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.406 [2024-10-07 11:31:49.173859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.173978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.174010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.406 [2024-10-07 11:31:49.174028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.174060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.174115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.174143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.174161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.174963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.406 [2024-10-07 11:31:49.177583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.177696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.177728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.406 [2024-10-07 11:31:49.177745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.177997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.178144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.178180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.178197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.178335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.406 [2024-10-07 11:31:49.184476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.184589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.184622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.406 [2024-10-07 11:31:49.184639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.184672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.184704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.184722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.184736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.184767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.406 [2024-10-07 11:31:49.187671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.187780] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.187810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.406 [2024-10-07 11:31:49.187827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.187858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.187889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.187908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.187929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.187958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.406 [2024-10-07 11:31:49.194967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.406 [2024-10-07 11:31:49.195080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.406 [2024-10-07 11:31:49.195112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.406 [2024-10-07 11:31:49.195129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.406 [2024-10-07 11:31:49.195161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.406 [2024-10-07 11:31:49.195193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.406 [2024-10-07 11:31:49.195212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.406 [2024-10-07 11:31:49.195226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.406 [2024-10-07 11:31:49.195256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.406 [2024-10-07 11:31:49.198268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.198403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.198435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.407 [2024-10-07 11:31:49.198453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.198485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.198517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.198535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.198550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.198580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.407 [2024-10-07 11:31:49.205269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.205395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.205427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.407 [2024-10-07 11:31:49.205444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.205711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.205862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.205897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.205914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.206021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.407 [2024-10-07 11:31:49.208826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.208939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.208970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.407 [2024-10-07 11:31:49.209008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.209042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.209074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.209092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.209106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.209136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.407 [2024-10-07 11:31:49.215376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.215488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.215519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.407 [2024-10-07 11:31:49.215537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.215568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.215600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.215618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.215632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.215663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.407 [2024-10-07 11:31:49.219106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.219220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.219251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.407 [2024-10-07 11:31:49.219269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.219300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.219568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.219595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.219610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.219748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.407 [2024-10-07 11:31:49.226163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.226276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.226335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.407 [2024-10-07 11:31:49.226356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.226390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.226423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.226441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.226471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.226504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.407 [2024-10-07 11:31:49.229326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.229436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.229467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.407 [2024-10-07 11:31:49.229485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.229516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.229548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.229566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.229581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.229616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.407 [2024-10-07 11:31:49.236751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.236864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.236896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.407 [2024-10-07 11:31:49.236913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.236945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.407 [2024-10-07 11:31:49.236977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.407 [2024-10-07 11:31:49.236995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.407 [2024-10-07 11:31:49.237009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.407 [2024-10-07 11:31:49.237039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.407 [2024-10-07 11:31:49.240063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.407 [2024-10-07 11:31:49.240179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.407 [2024-10-07 11:31:49.240209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.407 [2024-10-07 11:31:49.240227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.407 [2024-10-07 11:31:49.240258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.240290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.240308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.240337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.240369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.247057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.247191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.247222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.408 [2024-10-07 11:31:49.247240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.247523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.247674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.247709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.247727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.247834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.408 [2024-10-07 11:31:49.250632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.250743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.250774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.408 [2024-10-07 11:31:49.250792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.250824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.250856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.250874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.250888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.250918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.257184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.257351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.257384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.408 [2024-10-07 11:31:49.257402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.257436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.257469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.257487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.257502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.257534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.408 [2024-10-07 11:31:49.261065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.261177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.261208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.408 [2024-10-07 11:31:49.261225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.261523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.261660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.261694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.261712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.261821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.268028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.268141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.268173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.408 [2024-10-07 11:31:49.268190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.268222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.268254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.268272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.268286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.268330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.408 [2024-10-07 11:31:49.271179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.271292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.271335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.408 [2024-10-07 11:31:49.271355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.271388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.271420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.271438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.271452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.271482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.278631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.278743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.278775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.408 [2024-10-07 11:31:49.278793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.278825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.278857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.278875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.278905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.278967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.408 [2024-10-07 11:31:49.281929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.282040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.282071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.408 [2024-10-07 11:31:49.282089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.282122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.282153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.282172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.282186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.282216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.288930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.289080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.289124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.408 [2024-10-07 11:31:49.289144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.289437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.289588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.289621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.289639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.289750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.408 [2024-10-07 11:31:49.292631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.292754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.408 [2024-10-07 11:31:49.292801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.408 [2024-10-07 11:31:49.292831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.408 [2024-10-07 11:31:49.292875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.408 [2024-10-07 11:31:49.292920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.408 [2024-10-07 11:31:49.292938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.408 [2024-10-07 11:31:49.292953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.408 [2024-10-07 11:31:49.292983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.408 [2024-10-07 11:31:49.299114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.408 [2024-10-07 11:31:49.299229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.299288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.299308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.299360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.299394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.299412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.299427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.299459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.409 [2024-10-07 11:31:49.302940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.303053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.303084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.409 [2024-10-07 11:31:49.303101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.303387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.303540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.303565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.303579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.303686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.409 [2024-10-07 11:31:49.309898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.310015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.310046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.310064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.310096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.310127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.310145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.310159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.310189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.409 [2024-10-07 11:31:49.313055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.313166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.313197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.409 [2024-10-07 11:31:49.313215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.313247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.313298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.313331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.313349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.313380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.409 [2024-10-07 11:31:49.320494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.320609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.320642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.320659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.320691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.320723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.320741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.320755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.320785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.409 [2024-10-07 11:31:49.323820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.323933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.323964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.409 [2024-10-07 11:31:49.323981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.324013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.324045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.324063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.324078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.324107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.409 [2024-10-07 11:31:49.330765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.330878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.330910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.330928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.331180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.331341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.331383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.331399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.331508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.409 [2024-10-07 11:31:49.334436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.334550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.334582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.409 [2024-10-07 11:31:49.334599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.334631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.334663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.334681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.334695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.334725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.409 [2024-10-07 11:31:49.340896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.341022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.341054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.341071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.341103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.341136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.341153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.341168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.341199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.409 [2024-10-07 11:31:49.344785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.344907] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.344938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.409 [2024-10-07 11:31:49.344956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.345210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.345370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.345401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.345418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.409 [2024-10-07 11:31:49.345527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.409 [2024-10-07 11:31:49.351725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.409 [2024-10-07 11:31:49.351842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.409 [2024-10-07 11:31:49.351874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.409 [2024-10-07 11:31:49.351918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.409 [2024-10-07 11:31:49.351952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.409 [2024-10-07 11:31:49.351984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.409 [2024-10-07 11:31:49.352002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.409 [2024-10-07 11:31:49.352016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.352047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.356854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.357156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.357225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.357262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.358713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.359683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.359735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.359754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.359935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.410 [2024-10-07 11:31:49.362402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.362518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.362550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.410 [2024-10-07 11:31:49.362568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.362600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.362632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.362650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.362665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.362696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.369211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.370362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.370437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.370475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.370707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.372490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.372586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.372619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.373872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.410 [2024-10-07 11:31:49.374140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.374423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.374482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.410 [2024-10-07 11:31:49.374524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.375984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.376819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.376861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.376886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.377002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.379927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.380046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.380078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.380096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.380128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.380160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.380179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.380193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.380223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.410 [2024-10-07 11:31:49.384516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.384633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.384665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.410 [2024-10-07 11:31:49.384683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.384716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.384748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.384766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.384780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.384810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.390644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.390788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.390821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.390839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.390871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.390903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.390921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.390936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.390965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.410 [2024-10-07 11:31:49.394611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.394725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.394757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.410 [2024-10-07 11:31:49.394774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.394806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.394845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.394863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.394877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.394907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.401020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.401136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.401169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.401186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.401453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.401603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.401638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.401656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.401763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.410 [2024-10-07 11:31:49.404701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.404813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.404844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.410 [2024-10-07 11:31:49.404862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.404912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.404946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.404965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.410 [2024-10-07 11:31:49.404979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.410 [2024-10-07 11:31:49.405010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.410 [2024-10-07 11:31:49.411127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.410 [2024-10-07 11:31:49.411245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.410 [2024-10-07 11:31:49.411278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.410 [2024-10-07 11:31:49.411295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.410 [2024-10-07 11:31:49.411341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.410 [2024-10-07 11:31:49.411376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.410 [2024-10-07 11:31:49.411395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.411409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.411440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.411 [2024-10-07 11:31:49.415113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.415229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.415261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.411 [2024-10-07 11:31:49.415278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.415544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.415705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.415733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.415748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.415855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.411 [2024-10-07 11:31:49.422147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.422262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.422309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.411 [2024-10-07 11:31:49.422368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.422404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.422437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.422455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.422499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.422533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.411 [2024-10-07 11:31:49.425368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.425472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.425503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.411 [2024-10-07 11:31:49.425521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.425553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.425585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.425603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.425618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.425647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.411 [2024-10-07 11:31:49.432932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.433077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.433109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.411 [2024-10-07 11:31:49.433127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.433160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.433194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.433213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.433228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.433258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.411 [2024-10-07 11:31:49.436309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.436441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.436473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.411 [2024-10-07 11:31:49.436491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.436523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.436555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.436574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.436589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.436620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.411 [2024-10-07 11:31:49.443349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.443456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.443510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.411 [2024-10-07 11:31:49.443529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.443782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.443944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.443976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.443993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.444100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.411 [2024-10-07 11:31:49.447001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.447119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.447151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.411 [2024-10-07 11:31:49.447169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.447200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.447233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.447251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.447265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.447296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.411 [2024-10-07 11:31:49.453545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.453660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.453691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.411 [2024-10-07 11:31:49.453709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.453741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.453773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.453791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.453806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.453837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.411 [2024-10-07 11:31:49.457477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.457593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.457624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.411 [2024-10-07 11:31:49.457642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.411 [2024-10-07 11:31:49.457897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.411 [2024-10-07 11:31:49.458067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.411 [2024-10-07 11:31:49.458093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.411 [2024-10-07 11:31:49.458108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.411 [2024-10-07 11:31:49.458216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.411 [2024-10-07 11:31:49.464612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.411 [2024-10-07 11:31:49.464746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.411 [2024-10-07 11:31:49.464779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.464797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.464830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.464862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.464880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.464895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.464925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.412 [2024-10-07 11:31:49.467810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.467925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.467964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.412 [2024-10-07 11:31:49.467981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.468014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.468046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.468064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.468079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.468109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.412 [2024-10-07 11:31:49.475280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.475407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.475439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.475457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.475489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.475521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.475540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.475554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.475584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.412 [2024-10-07 11:31:49.478762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.478879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.478914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.412 [2024-10-07 11:31:49.478931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.478964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.478997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.479014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.479028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.479058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.412 [2024-10-07 11:31:49.485787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.485902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.485944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.485961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.486213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.486386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.486413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.486429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.486545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.412 [2024-10-07 11:31:49.489448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.489561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.489592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.412 [2024-10-07 11:31:49.489609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.489641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.489673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.489691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.489706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.489736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.412 [2024-10-07 11:31:49.495964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.496076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.496108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.496143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.496177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.496209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.496228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.496243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.496273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.412 8945.31 IOPS, 34.94 MiB/s [2024-10-07T11:31:52.935Z] [2024-10-07 11:31:49.503832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.504029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.504068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.412 [2024-10-07 11:31:49.504088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.504121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.504153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.504172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.504192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.504224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.412 [2024-10-07 11:31:49.506930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.507042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.507073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.507090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.507122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.507153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.507172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.507186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.507215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.412 [2024-10-07 11:31:49.514173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.514370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.514405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.412 [2024-10-07 11:31:49.514423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.514686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.514854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.514916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.514935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.515054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.412 [2024-10-07 11:31:49.517815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.412 [2024-10-07 11:31:49.517926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.412 [2024-10-07 11:31:49.517960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.412 [2024-10-07 11:31:49.517977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.412 [2024-10-07 11:31:49.518024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.412 [2024-10-07 11:31:49.518060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.412 [2024-10-07 11:31:49.518078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.412 [2024-10-07 11:31:49.518092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.412 [2024-10-07 11:31:49.518139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.524524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.524647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.524680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.524698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.524730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.524763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.524780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.524795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.524826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.413 [2024-10-07 11:31:49.528512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.528627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.528658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.413 [2024-10-07 11:31:49.528676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.528927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.529080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.529126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.529145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.529257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.535515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.535651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.535683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.535700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.535732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.535764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.535782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.535797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.535828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.413 [2024-10-07 11:31:49.538654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.538766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.538797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.413 [2024-10-07 11:31:49.538815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.538847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.538879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.538897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.538912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.538941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.546135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.546274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.546334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.546356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.546391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.546425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.546443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.546458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.546489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.413 [2024-10-07 11:31:49.549590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.549704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.549736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.413 [2024-10-07 11:31:49.549754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.549809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.549842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.549861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.549875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.549919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.556650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.556785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.556818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.556836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.557088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.557247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.557282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.557305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.557430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.413 [2024-10-07 11:31:49.560257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.560385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.560417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.413 [2024-10-07 11:31:49.560434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.560466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.560499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.560517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.560531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.560561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.566744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.566857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.566889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.566907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.566939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.566970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.566989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.567019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.567053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.413 [2024-10-07 11:31:49.570633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.570752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.570785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.413 [2024-10-07 11:31:49.570802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.571054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.571208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.571245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.571263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.413 [2024-10-07 11:31:49.571384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.413 [2024-10-07 11:31:49.577532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.413 [2024-10-07 11:31:49.577647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.413 [2024-10-07 11:31:49.577679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.413 [2024-10-07 11:31:49.577696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.413 [2024-10-07 11:31:49.577728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.413 [2024-10-07 11:31:49.577760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.413 [2024-10-07 11:31:49.577778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.413 [2024-10-07 11:31:49.577793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.577824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.580727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.580839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.580870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.580888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.580920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.580952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.580971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.580987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.581017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.414 [2024-10-07 11:31:49.588104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.588219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.588267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.414 [2024-10-07 11:31:49.588286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.588334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.588369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.588388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.588402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.588433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.591494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.591605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.591637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.591654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.591687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.591719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.591737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.591752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.591781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.414 [2024-10-07 11:31:49.598549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.598690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.598723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.414 [2024-10-07 11:31:49.598742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.598999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.599154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.599190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.599209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.599341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.602200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.602339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.602372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.602398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.602432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.602487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.602507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.602522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.602552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.414 [2024-10-07 11:31:49.608711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.608829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.608861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.414 [2024-10-07 11:31:49.608878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.608911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.608943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.608961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.608976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.609007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.612641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.612746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.612777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.612795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.613046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.613200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.613226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.613241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.613362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.414 [2024-10-07 11:31:49.619527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.619642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.619680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.414 [2024-10-07 11:31:49.619697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.619729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.619761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.619779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.619793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.619844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.622726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.622837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.622869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.622887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.622919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.622952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.622970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.622985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.623015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.414 [2024-10-07 11:31:49.630182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.630348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.630384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.414 [2024-10-07 11:31:49.630407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.630443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.630477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.414 [2024-10-07 11:31:49.630495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.414 [2024-10-07 11:31:49.630510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.414 [2024-10-07 11:31:49.630541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.414 [2024-10-07 11:31:49.633663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.414 [2024-10-07 11:31:49.633775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.414 [2024-10-07 11:31:49.633806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.414 [2024-10-07 11:31:49.633825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.414 [2024-10-07 11:31:49.633858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.414 [2024-10-07 11:31:49.633891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.633910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.633924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.633955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.415 [2024-10-07 11:31:49.640782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.640894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.640926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.415 [2024-10-07 11:31:49.640968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.641223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.641396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.641432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.641450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.641559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.415 [2024-10-07 11:31:49.644362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.644474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.644505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.415 [2024-10-07 11:31:49.644522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.644554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.644587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.644605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.644619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.644649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.415 [2024-10-07 11:31:49.650874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.650984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.651016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.415 [2024-10-07 11:31:49.651033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.651064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.651096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.651114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.651128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.651159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.415 [2024-10-07 11:31:49.654775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.654884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.654915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.415 [2024-10-07 11:31:49.654932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.655183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.655359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.655411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.655429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.655538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.415 [2024-10-07 11:31:49.661653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.661766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.661806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.415 [2024-10-07 11:31:49.661824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.661856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.661888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.661907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.661921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.661952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.415 [2024-10-07 11:31:49.664868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.664977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.665008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.415 [2024-10-07 11:31:49.665025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.665056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.665088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.665106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.665121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.665150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.415 [2024-10-07 11:31:49.672217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.672344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.672377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.415 [2024-10-07 11:31:49.672394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.672427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.672459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.672477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.672492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.672522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.415 [2024-10-07 11:31:49.675573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.675687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.675719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.415 [2024-10-07 11:31:49.675736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.675768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.675801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.675820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.675834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.675864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.415 [2024-10-07 11:31:49.682619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.682734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.682765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.415 [2024-10-07 11:31:49.682783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.415 [2024-10-07 11:31:49.683034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.415 [2024-10-07 11:31:49.683193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.415 [2024-10-07 11:31:49.683221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.415 [2024-10-07 11:31:49.683236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.415 [2024-10-07 11:31:49.683359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.415 [2024-10-07 11:31:49.686174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.415 [2024-10-07 11:31:49.686295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.415 [2024-10-07 11:31:49.686340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.415 [2024-10-07 11:31:49.686359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.686392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.686425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.686443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.686457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.686487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.692709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.692829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.692860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.416 [2024-10-07 11:31:49.692878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.692929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.692962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.692980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.692994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.693025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.416 [2024-10-07 11:31:49.696548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.696662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.696702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.416 [2024-10-07 11:31:49.696719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.696970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.697130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.697162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.697179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.697285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.703413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.703529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.703561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.416 [2024-10-07 11:31:49.703578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.703611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.703643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.703661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.703676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.703706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.416 [2024-10-07 11:31:49.706636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.706744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.706775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.416 [2024-10-07 11:31:49.706792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.706824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.706858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.706877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.706910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.706942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.713965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.714099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.714132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.416 [2024-10-07 11:31:49.714150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.714183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.714215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.714234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.714248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.714279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.416 [2024-10-07 11:31:49.717355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.717465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.717497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.416 [2024-10-07 11:31:49.717515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.717548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.717580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.717599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.717614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.717644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.724515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.724648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.724681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.416 [2024-10-07 11:31:49.724699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.724954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.725103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.725139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.725158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.725267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.416 [2024-10-07 11:31:49.728052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.728164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.728222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.416 [2024-10-07 11:31:49.728241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.728274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.728307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.728340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.728356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.728388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.734615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.734730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.734762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.416 [2024-10-07 11:31:49.734779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.734811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.734847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.734865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.734879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.734910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.416 [2024-10-07 11:31:49.738382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.738492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.416 [2024-10-07 11:31:49.738523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.416 [2024-10-07 11:31:49.738544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.416 [2024-10-07 11:31:49.738796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.416 [2024-10-07 11:31:49.738947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.416 [2024-10-07 11:31:49.738982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.416 [2024-10-07 11:31:49.739001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.416 [2024-10-07 11:31:49.739114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.416 [2024-10-07 11:31:49.745254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.416 [2024-10-07 11:31:49.745383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.745415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.745433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.745465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.745521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.745541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.745555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.745587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.417 [2024-10-07 11:31:49.748472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.748583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.748614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.417 [2024-10-07 11:31:49.748631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.748663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.748695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.748713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.748727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.748757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.417 [2024-10-07 11:31:49.755771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.755887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.755919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.755937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.755969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.756003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.756021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.756035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.756066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.417 [2024-10-07 11:31:49.759098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.759212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.759243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.417 [2024-10-07 11:31:49.759261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.759293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.759342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.759363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.759378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.759426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.417 [2024-10-07 11:31:49.766161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.766273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.766335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.766355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.766609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.766769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.766804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.766821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.766929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.417 [2024-10-07 11:31:49.769786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.769896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.769928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.417 [2024-10-07 11:31:49.769945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.769977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.770009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.770027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.770041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.770071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.417 [2024-10-07 11:31:49.776273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.776397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.776429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.776446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.776478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.776509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.776527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.776542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.776572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.417 [2024-10-07 11:31:49.780139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.780251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.780282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.417 [2024-10-07 11:31:49.780331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.780589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.780749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.780784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.780801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.780909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.417 [2024-10-07 11:31:49.786974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.787090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.787122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.787140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.787171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.787204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.787222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.787236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.787268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.417 [2024-10-07 11:31:49.790227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.790367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.790401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.417 [2024-10-07 11:31:49.790418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.790451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.790483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.790501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.790516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.417 [2024-10-07 11:31:49.790546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.417 [2024-10-07 11:31:49.797642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.417 [2024-10-07 11:31:49.797789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.417 [2024-10-07 11:31:49.797823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.417 [2024-10-07 11:31:49.797842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.417 [2024-10-07 11:31:49.797875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.417 [2024-10-07 11:31:49.797914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.417 [2024-10-07 11:31:49.797962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.417 [2024-10-07 11:31:49.797978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.798010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.801024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.801136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.801168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.801186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.801219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.801252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.801270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.801285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.801328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.418 [2024-10-07 11:31:49.808090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.808227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.808260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.418 [2024-10-07 11:31:49.808278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.808554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.808706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.808742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.808760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.808872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.811688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.811801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.811832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.811849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.811881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.811913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.811931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.811946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.811976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.418 [2024-10-07 11:31:49.818190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.818329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.818364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.418 [2024-10-07 11:31:49.818382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.818416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.818449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.818467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.818489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.818520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.822003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.822129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.822161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.822179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.822461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.822647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.822679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.822696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.822802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.418 [2024-10-07 11:31:49.828919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.829039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.829072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.418 [2024-10-07 11:31:49.829089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.829122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.829154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.829172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.829186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.829216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.832089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.832201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.832233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.832250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.832302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.832351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.832370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.832385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.832414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.418 [2024-10-07 11:31:49.839524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.839640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.839671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.418 [2024-10-07 11:31:49.839689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.839721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.839753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.839771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.839785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.839816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.842870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.842987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.843019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.843037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.843069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.843102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.843120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.843135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.843165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.418 [2024-10-07 11:31:49.849844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.849966] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.849998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.418 [2024-10-07 11:31:49.850016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.850267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.850454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.850491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.418 [2024-10-07 11:31:49.850525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.418 [2024-10-07 11:31:49.850636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.418 [2024-10-07 11:31:49.853409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.418 [2024-10-07 11:31:49.853521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.418 [2024-10-07 11:31:49.853552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.418 [2024-10-07 11:31:49.853570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.418 [2024-10-07 11:31:49.853601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.418 [2024-10-07 11:31:49.853634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.418 [2024-10-07 11:31:49.853652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.853666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.853697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.419 [2024-10-07 11:31:49.859939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.860054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.860086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.419 [2024-10-07 11:31:49.860103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.860135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.860166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.860184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.860198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.860229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.419 [2024-10-07 11:31:49.863712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.863824] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.863855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.419 [2024-10-07 11:31:49.863873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.864131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.864310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.864358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.864375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.864483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.419 [2024-10-07 11:31:49.870564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.870677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.870724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.419 [2024-10-07 11:31:49.870744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.870777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.870810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.870828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.870842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.870872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.419 [2024-10-07 11:31:49.873803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.873913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.873944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.419 [2024-10-07 11:31:49.873961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.873992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.874025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.874042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.874057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.874087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.419 [2024-10-07 11:31:49.881097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.881212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.881244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.419 [2024-10-07 11:31:49.881262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.881293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.881343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.881364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.881379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.881409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.419 [2024-10-07 11:31:49.884424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.884537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.884568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.419 [2024-10-07 11:31:49.884586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.884617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.884668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.884687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.884701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.884731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.419 [2024-10-07 11:31:49.891423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.891537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.891568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.419 [2024-10-07 11:31:49.891585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.891840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.891987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.892024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.892042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.892150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.419 [2024-10-07 11:31:49.894971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.895082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.895113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.419 [2024-10-07 11:31:49.895131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.895162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.895194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.895212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.895226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.895256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.419 [2024-10-07 11:31:49.901513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.901624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.901656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.419 [2024-10-07 11:31:49.901673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.901704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.901736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.901753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.419 [2024-10-07 11:31:49.901767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.419 [2024-10-07 11:31:49.901815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.419 [2024-10-07 11:31:49.905213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.419 [2024-10-07 11:31:49.905857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.419 [2024-10-07 11:31:49.905903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.419 [2024-10-07 11:31:49.905923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.419 [2024-10-07 11:31:49.906182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.419 [2024-10-07 11:31:49.906359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.419 [2024-10-07 11:31:49.906393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.906410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.906518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.420 [2024-10-07 11:31:49.912103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.912217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.912249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.420 [2024-10-07 11:31:49.912266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.912298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.912348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.912369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.912383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.912414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.420 [2024-10-07 11:31:49.915303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.915438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.915469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.420 [2024-10-07 11:31:49.915486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.915518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.915550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.915568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.915582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.915612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.420 [2024-10-07 11:31:49.922596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.922711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.922743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.420 [2024-10-07 11:31:49.922786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.922823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.922855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.922873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.922887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.922918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.420 [2024-10-07 11:31:49.925912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.926023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.926054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.420 [2024-10-07 11:31:49.926071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.926103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.926135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.926154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.926168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.926198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.420 [2024-10-07 11:31:49.932967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.933101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.933132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.420 [2024-10-07 11:31:49.933150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.933419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.933576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.933611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.933629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.933737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.420 [2024-10-07 11:31:49.936513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.936624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.936655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.420 [2024-10-07 11:31:49.936672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.936704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.936735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.936770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.936786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.936817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.420 [2024-10-07 11:31:49.943062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.943182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.943213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.420 [2024-10-07 11:31:49.943231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.943263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.943303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.943336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.943352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.943392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.420 [2024-10-07 11:31:49.946877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.946989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.947020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.420 [2024-10-07 11:31:49.947037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.947290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.947452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.947500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.947518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.420 [2024-10-07 11:31:49.947626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.420 [2024-10-07 11:31:49.953802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.420 [2024-10-07 11:31:49.953930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.420 [2024-10-07 11:31:49.953962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.420 [2024-10-07 11:31:49.953981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.420 [2024-10-07 11:31:49.954013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.420 [2024-10-07 11:31:49.954046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.420 [2024-10-07 11:31:49.954064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.420 [2024-10-07 11:31:49.954079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.954110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:49.956998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.957109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.957141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:49.957159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.957192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.957224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.957242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.957257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.957286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.421 [2024-10-07 11:31:49.964425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.964541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.964573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.421 [2024-10-07 11:31:49.964590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.964622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.964654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.964672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.964686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.964721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:49.967786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.967904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.967935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:49.967952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.967984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.968017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.968035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.968050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.968080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.421 [2024-10-07 11:31:49.974849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.974978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.975010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.421 [2024-10-07 11:31:49.975028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.975304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.975479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.975511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.975528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.975639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:49.978445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.978558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.978590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:49.978607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.978639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.978670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.978688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.978703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.978733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.421 [2024-10-07 11:31:49.984953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.985068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.985100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.421 [2024-10-07 11:31:49.985117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.985150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.985182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.985199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.985213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.985244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:49.988833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.988951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.988982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:49.988999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.989251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.989426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.989462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.989499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.989608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.421 [2024-10-07 11:31:49.995715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.995828] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.995860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.421 [2024-10-07 11:31:49.995888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.995920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.995952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.995970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.995984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.996014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:49.998920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:49.999032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:49.999063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:49.999081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:49.999112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:49.999144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:49.999162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:49.999176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:49.999205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.421 [2024-10-07 11:31:50.006348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:50.006473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:50.006505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.421 [2024-10-07 11:31:50.006523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:50.006556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:50.006587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:50.006605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.421 [2024-10-07 11:31:50.006621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.421 [2024-10-07 11:31:50.006651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.421 [2024-10-07 11:31:50.009684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.421 [2024-10-07 11:31:50.009830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.421 [2024-10-07 11:31:50.009862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.421 [2024-10-07 11:31:50.009880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.421 [2024-10-07 11:31:50.009913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.421 [2024-10-07 11:31:50.009946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.421 [2024-10-07 11:31:50.009964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.009978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.010008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.016799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.016939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.016971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.016990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.017247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.017425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.017461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.017480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.017591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.422 [2024-10-07 11:31:50.020446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.020561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.020594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.422 [2024-10-07 11:31:50.020612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.020644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.020676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.020695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.020720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.020750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.027054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.027198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.027230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.027248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.027288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.027365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.027387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.027402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.027434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.422 [2024-10-07 11:31:50.031004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.031117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.031149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.422 [2024-10-07 11:31:50.031167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.031435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.031590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.031626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.031644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.031751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.037940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.038054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.038086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.038104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.038136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.038176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.038195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.038209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.038240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.422 [2024-10-07 11:31:50.041155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.041265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.041296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.422 [2024-10-07 11:31:50.041329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.041366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.041398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.041417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.041432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.041478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.048527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.048642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.048674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.048692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.048724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.048756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.048774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.048788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.048819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.422 [2024-10-07 11:31:50.051905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.052018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.052049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.422 [2024-10-07 11:31:50.052066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.052100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.052132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.052150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.052164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.052194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.058917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.059030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.059062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.059080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.059347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.059504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.059539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.059558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.059666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.422 [2024-10-07 11:31:50.062500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.062612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.062644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.422 [2024-10-07 11:31:50.062678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.062712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.422 [2024-10-07 11:31:50.062745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.422 [2024-10-07 11:31:50.062767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.422 [2024-10-07 11:31:50.062781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.422 [2024-10-07 11:31:50.062811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.422 [2024-10-07 11:31:50.069013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.422 [2024-10-07 11:31:50.069141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.422 [2024-10-07 11:31:50.069173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.422 [2024-10-07 11:31:50.069191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.422 [2024-10-07 11:31:50.069224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.069256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.069274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.069288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.069337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.072811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.072926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.072957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.072975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.073226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.073392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.073429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.073447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.073555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.423 [2024-10-07 11:31:50.079709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.079838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.079871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.423 [2024-10-07 11:31:50.079889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.079921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.079953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.079988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.080004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.080036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.082908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.083022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.083054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.083071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.083107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.083142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.083160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.083174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.083204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.423 [2024-10-07 11:31:50.090198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.090341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.090374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.423 [2024-10-07 11:31:50.090391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.090425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.090458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.090476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.090490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.090521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.093533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.093646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.093677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.093694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.093726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.093758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.093778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.093792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.093822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.423 [2024-10-07 11:31:50.100485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.100599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.100630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.423 [2024-10-07 11:31:50.100647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.100899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.101076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.101113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.101131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.101239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.104030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.104141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.104172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.104189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.104221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.104252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.104271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.104285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.104331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.423 [2024-10-07 11:31:50.110581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.110695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.110727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.423 [2024-10-07 11:31:50.110745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.110777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.110809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.110828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.110842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.110872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.114331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.114444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.114476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.114494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.114763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.114923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.114955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.114972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.115082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.423 [2024-10-07 11:31:50.121180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.121296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.121342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.423 [2024-10-07 11:31:50.121361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.423 [2024-10-07 11:31:50.121393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.423 [2024-10-07 11:31:50.121426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.423 [2024-10-07 11:31:50.121444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.423 [2024-10-07 11:31:50.121459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.423 [2024-10-07 11:31:50.121489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.423 [2024-10-07 11:31:50.124419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.423 [2024-10-07 11:31:50.124533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.423 [2024-10-07 11:31:50.124565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.423 [2024-10-07 11:31:50.124583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.124615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.124648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.124666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.124681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.124711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.424 [2024-10-07 11:31:50.132177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.132343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.132377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.424 [2024-10-07 11:31:50.132396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.132431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.132464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.132483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.132535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.132570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.424 [2024-10-07 11:31:50.135608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.135734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.135766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.424 [2024-10-07 11:31:50.135784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.135817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.135849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.135868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.135883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.135913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.424 [2024-10-07 11:31:50.142824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.142972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.143006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.424 [2024-10-07 11:31:50.143025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.143287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.143454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.143481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.143498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.143607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.424 [2024-10-07 11:31:50.146396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.146510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.146541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.424 [2024-10-07 11:31:50.146560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.146593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.146625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.146643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.146658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.146689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
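Note on the "(9): Bad file descriptor" reported by nvme_tcp_qpair_process_completions in each cycle: errno 9 is EBADF; by the time the flush runs, the socket created by the failed connect has already been torn down, so the flush operates on an invalid descriptor. A trivial POSIX sketch of the same errno (the dup/close/write sequence is purely illustrative, not the SPDK code path):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Writing to a descriptor that has already been closed fails with
         * errno = 9 (EBADF), matching the "(9): Bad file descriptor" above. */
        int fd = dup(1);          /* valid descriptor */
        close(fd);                /* ...now invalid */

        if (write(fd, "x", 1) < 0) {
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }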
00:20:57.424 [2024-10-07 11:31:50.152933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.153074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.153107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.424 [2024-10-07 11:31:50.153125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.153157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.153190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.153208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.153222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.153253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.424 [2024-10-07 11:31:50.156823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.156936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.156968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.424 [2024-10-07 11:31:50.156985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.157236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.157409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.157446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.157464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.157572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.424 [2024-10-07 11:31:50.163682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.163798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.163829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.424 [2024-10-07 11:31:50.163847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.163879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.163911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.163929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.163943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.163973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.424 [2024-10-07 11:31:50.166916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.167028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.167059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.424 [2024-10-07 11:31:50.167076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.167108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.167157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.167176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.167191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.167220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.424 [2024-10-07 11:31:50.174190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.174330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.424 [2024-10-07 11:31:50.174363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.424 [2024-10-07 11:31:50.174381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.424 [2024-10-07 11:31:50.174415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.424 [2024-10-07 11:31:50.174448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.424 [2024-10-07 11:31:50.174465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.424 [2024-10-07 11:31:50.174480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.424 [2024-10-07 11:31:50.174510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.424 [2024-10-07 11:31:50.177551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.424 [2024-10-07 11:31:50.177661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.177692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.177709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.177741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.177773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.177792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.177806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.177836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.184557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.184670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.184701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.425 [2024-10-07 11:31:50.184718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.184985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.185141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.185176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.185194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.185335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.425 [2024-10-07 11:31:50.188117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.188228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.188260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.188277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.188309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.188358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.188377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.188391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.188421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.194647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.194761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.194792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.425 [2024-10-07 11:31:50.194810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.194842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.194874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.194892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.194906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.194937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.425 [2024-10-07 11:31:50.198495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.198608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.198639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.198656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.198912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.199073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.199108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.199125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.199243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.205298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.205427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.205459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.425 [2024-10-07 11:31:50.205494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.205528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.205560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.205578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.205592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.205623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.425 [2024-10-07 11:31:50.208584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.208695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.208726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.208743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.208774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.208806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.208824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.208837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.208868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.215864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.215979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.216011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.425 [2024-10-07 11:31:50.216028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.216060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.216092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.216109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.216123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.216154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.425 [2024-10-07 11:31:50.219222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.219346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.219377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.219394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.219426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.219458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.219493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.219509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.219540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.226176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.226301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.226347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.425 [2024-10-07 11:31:50.226366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.226620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.226787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.226822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.226840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.226948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.425 [2024-10-07 11:31:50.229730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.425 [2024-10-07 11:31:50.229840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.425 [2024-10-07 11:31:50.229872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.425 [2024-10-07 11:31:50.229889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.425 [2024-10-07 11:31:50.229920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.425 [2024-10-07 11:31:50.229952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.425 [2024-10-07 11:31:50.229970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.425 [2024-10-07 11:31:50.229984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.425 [2024-10-07 11:31:50.230014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.425 [2024-10-07 11:31:50.236265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.236389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.236421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.236438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.236470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.236502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.236519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.236534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.236565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.426 [2024-10-07 11:31:50.240006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.240122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.240153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.426 [2024-10-07 11:31:50.240171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.240453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.240610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.240646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.240664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.240772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.426 [2024-10-07 11:31:50.247202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.247401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.247450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.247479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.247529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.247577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.247605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.247629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.247676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.426 [2024-10-07 11:31:50.251797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.252001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.252036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.426 [2024-10-07 11:31:50.252054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.253337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.254058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.254097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.254116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.254502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.426 [2024-10-07 11:31:50.257734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.257890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.257933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.257967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.258039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.258086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.258113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.258136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.258203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.426 [2024-10-07 11:31:50.264378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.265551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.265620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.426 [2024-10-07 11:31:50.265655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.265889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.266042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.266088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.266121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.267709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.426 [2024-10-07 11:31:50.268963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.269090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.269124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.269141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.269175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.269208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.269226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.269240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.270544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.426 [2024-10-07 11:31:50.274648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.274764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.274795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.426 [2024-10-07 11:31:50.274812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.274848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.274879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.274897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.274935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.274970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.426 [2024-10-07 11:31:50.279184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.279300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.279346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.279365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.279399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.279432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.279450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.279464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.279494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.426 [2024-10-07 11:31:50.285165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.285281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.285313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.426 [2024-10-07 11:31:50.285349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.285382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.285414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.285432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.285447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.285477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.426 [2024-10-07 11:31:50.289275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.426 [2024-10-07 11:31:50.289400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.426 [2024-10-07 11:31:50.289433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.426 [2024-10-07 11:31:50.289450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.426 [2024-10-07 11:31:50.290633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.426 [2024-10-07 11:31:50.290873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.426 [2024-10-07 11:31:50.290900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.426 [2024-10-07 11:31:50.290915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.426 [2024-10-07 11:31:50.291655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.295432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.295564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.295596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.295614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.295869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.296032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.296068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.296086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.296195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.427 [2024-10-07 11:31:50.299380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.299493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.299525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.427 [2024-10-07 11:31:50.299542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.299573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.299605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.299623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.299637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.299667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.305540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.305654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.305685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.305711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.305742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.305774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.305793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.305807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.306558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.427 [2024-10-07 11:31:50.309788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.309920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.309951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.427 [2024-10-07 11:31:50.309969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.310001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.310051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.310071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.310085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.310116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.316046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.316162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.316194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.316210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.316242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.316274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.316292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.316307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.316353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.427 [2024-10-07 11:31:50.320592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.320706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.320737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.427 [2024-10-07 11:31:50.320754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.320794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.320825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.320843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.320857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.320888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.326558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.326672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.326704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.326721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.326753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.326785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.326803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.326817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.326865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.427 [2024-10-07 11:31:50.330685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.330810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.330842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.427 [2024-10-07 11:31:50.330859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.332025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.332283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.332337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.332357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.333105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.336823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.336938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.336970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.336988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.337239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.337404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.337448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.337467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.337575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.427 [2024-10-07 11:31:50.340784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.340894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.340925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.427 [2024-10-07 11:31:50.340942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.340973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.341005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.427 [2024-10-07 11:31:50.341024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.427 [2024-10-07 11:31:50.341038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.427 [2024-10-07 11:31:50.341068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.427 [2024-10-07 11:31:50.346917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.427 [2024-10-07 11:31:50.347030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.427 [2024-10-07 11:31:50.347062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.427 [2024-10-07 11:31:50.347096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.427 [2024-10-07 11:31:50.347129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.427 [2024-10-07 11:31:50.347161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.347179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.347194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.347935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.428 [2024-10-07 11:31:50.351160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.351282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.351327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.428 [2024-10-07 11:31:50.351348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.351381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.351413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.351431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.351445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.351475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.428 [2024-10-07 11:31:50.357384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.357498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.357529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.428 [2024-10-07 11:31:50.357547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.357579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.357611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.357629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.357643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.357673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.428 [2024-10-07 11:31:50.361885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.361998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.362029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.428 [2024-10-07 11:31:50.362046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.362078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.362127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.362163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.362179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.362211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.428 [2024-10-07 11:31:50.367856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.367970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.368001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.428 [2024-10-07 11:31:50.368019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.368050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.368081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.368100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.368115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.368145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.428 [2024-10-07 11:31:50.371974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.372085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.372116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.428 [2024-10-07 11:31:50.372133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.373297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.373553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.373589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.373606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.374360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.428 [2024-10-07 11:31:50.378098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.378210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.378242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.428 [2024-10-07 11:31:50.378260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.378546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.378696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.378723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.378737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.378844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.428 [2024-10-07 11:31:50.382066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.382178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.382210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.428 [2024-10-07 11:31:50.382227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.382259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.382304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.382339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.382355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.382395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.428 [2024-10-07 11:31:50.388224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.388359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.388392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.428 [2024-10-07 11:31:50.388410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.388443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.388475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.388494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.388509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.388539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.428 [2024-10-07 11:31:50.392177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.392296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.392340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.428 [2024-10-07 11:31:50.392360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.428 [2024-10-07 11:31:50.392614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.428 [2024-10-07 11:31:50.392772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.428 [2024-10-07 11:31:50.392808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.428 [2024-10-07 11:31:50.392826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.428 [2024-10-07 11:31:50.392935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.428 [2024-10-07 11:31:50.399088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.428 [2024-10-07 11:31:50.399203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.428 [2024-10-07 11:31:50.399235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.428 [2024-10-07 11:31:50.399253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.399311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.399361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.399381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.399395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.399425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.402266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.402405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.402437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.429 [2024-10-07 11:31:50.402454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.402485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.402517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.402535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.402549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.402580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.429 [2024-10-07 11:31:50.409648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.409761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.409793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.429 [2024-10-07 11:31:50.409810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.409842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.409874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.409892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.409906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.409936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.412962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.413076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.413107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.429 [2024-10-07 11:31:50.413125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.413157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.413190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.413208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.413239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.413273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.429 [2024-10-07 11:31:50.419987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.420107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.420138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.429 [2024-10-07 11:31:50.420155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.420422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.420585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.420612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.420627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.420733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.423536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.423650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.423682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.429 [2024-10-07 11:31:50.423699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.423731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.423763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.423780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.423794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.423825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.429 [2024-10-07 11:31:50.430084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.430197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.430229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.429 [2024-10-07 11:31:50.430246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.430278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.430337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.430358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.430373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.430403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.433843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.433972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.434004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.429 [2024-10-07 11:31:50.434021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.434272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.434448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.434484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.434500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.434607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.429 [2024-10-07 11:31:50.440700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.440813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.440845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.429 [2024-10-07 11:31:50.440862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.440894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.440925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.440943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.440958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.440987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.443950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.444060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.444091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.429 [2024-10-07 11:31:50.444108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.444139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.444171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.444189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.444203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.444234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.429 [2024-10-07 11:31:50.451284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.451408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.429 [2024-10-07 11:31:50.451440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.429 [2024-10-07 11:31:50.451458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.429 [2024-10-07 11:31:50.451490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.429 [2024-10-07 11:31:50.451541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.429 [2024-10-07 11:31:50.451561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.429 [2024-10-07 11:31:50.451575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.429 [2024-10-07 11:31:50.451605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.429 [2024-10-07 11:31:50.454673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.429 [2024-10-07 11:31:50.454787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.454818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.454836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.454867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.454898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.454917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.454930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.454961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.430 [2024-10-07 11:31:50.461695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.461810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.461841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.430 [2024-10-07 11:31:50.461859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.462110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.462265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.462312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.462346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.462456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.430 [2024-10-07 11:31:50.465222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.465345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.465378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.465396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.465428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.465461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.465479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.465493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.465541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.430 [2024-10-07 11:31:50.471782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.471897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.471928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.430 [2024-10-07 11:31:50.471946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.471978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.472010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.472028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.472042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.472073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.430 [2024-10-07 11:31:50.475547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.475660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.475691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.475709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.475961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.476107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.476143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.476161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.476269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.430 [2024-10-07 11:31:50.482466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.482579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.482611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.430 [2024-10-07 11:31:50.482628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.482660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.482692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.482710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.482725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.482754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.430 [2024-10-07 11:31:50.485641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.485740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.485770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.485806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.485839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.485871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.485889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.485902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.485932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.430 [2024-10-07 11:31:50.494180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.494330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.494364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.430 [2024-10-07 11:31:50.494382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.494425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.494458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.494476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.494491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.494522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.430 [2024-10-07 11:31:50.495719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.495814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.495844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.495862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.496693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.496892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.496918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.496933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.497018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.430 8960.21 IOPS, 35.00 MiB/s [2024-10-07T11:31:52.953Z] [2024-10-07 11:31:50.505560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.506439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.506490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.430 [2024-10-07 11:31:50.506512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.506639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.506712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.430 [2024-10-07 11:31:50.506747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.506764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.506778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.506808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.430 [2024-10-07 11:31:50.506871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.430 [2024-10-07 11:31:50.506899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.430 [2024-10-07 11:31:50.506915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.430 [2024-10-07 11:31:50.507170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.430 [2024-10-07 11:31:50.507335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.430 [2024-10-07 11:31:50.507363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.430 [2024-10-07 11:31:50.507378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.430 [2024-10-07 11:31:50.507488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.517771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.517812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.517992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.518026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.518044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.518094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.518118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.518133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.518167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.518191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.518218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.518236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.518250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:57.431 [2024-10-07 11:31:50.518267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.518281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.518309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.518359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.518378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.527883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.527958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.528039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.528068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.528085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.529030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.529074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.529094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.529113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.529305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.529353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.529369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.529383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.529426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.529446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.529460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.529474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.529502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.431 [2024-10-07 11:31:50.539686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.539746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.539840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.539871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.539888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.539938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.539961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.539977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.540010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.540033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.540060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.540077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.540114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.540132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.540146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.540160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.540198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.540222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.431 [2024-10-07 11:31:50.550020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.550071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.550164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.550195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.550212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.550263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.550301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.550334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.550593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.550625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.550758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.550784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.550800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.550817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.550831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.550843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.550949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.550970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.431 [2024-10-07 11:31:50.560146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.560222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.560304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.560346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.560363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.560430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.560457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.560492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.560512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.560546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.431 [2024-10-07 11:31:50.560567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.560582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.560596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.561334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.431 [2024-10-07 11:31:50.561361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.431 [2024-10-07 11:31:50.561376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.431 [2024-10-07 11:31:50.561390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.431 [2024-10-07 11:31:50.561559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.431 [2024-10-07 11:31:50.570846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.570897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.431 [2024-10-07 11:31:50.570990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.571021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.431 [2024-10-07 11:31:50.571038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.431 [2024-10-07 11:31:50.571085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.431 [2024-10-07 11:31:50.571108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.431 [2024-10-07 11:31:50.571124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.571156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.571180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.571207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.571224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.571238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.571255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.571269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.571282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.571312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.432 [2024-10-07 11:31:50.571345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.432 [2024-10-07 11:31:50.581399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.581474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.581574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.581605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.432 [2024-10-07 11:31:50.581623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.581671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.581694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.432 [2024-10-07 11:31:50.581709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.581748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.581771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.581799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.581817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.581831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.581848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.581862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.581876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.581905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.432 [2024-10-07 11:31:50.581922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.432 [2024-10-07 11:31:50.591841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.591907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.592009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.592041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.432 [2024-10-07 11:31:50.592058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.592108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.592131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.432 [2024-10-07 11:31:50.592147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.592427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.592461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.592597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.592624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.592639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.592675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.592692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.592706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.592813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.432 [2024-10-07 11:31:50.592834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.432 [2024-10-07 11:31:50.601987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.602038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.602132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.602163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.432 [2024-10-07 11:31:50.602181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.602229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.602252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.432 [2024-10-07 11:31:50.602267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.602312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.602353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.603076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.603124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.603143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.603160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.603175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.603188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.603372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.432 [2024-10-07 11:31:50.603398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.432 [2024-10-07 11:31:50.612613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.612663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.612759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.612790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.432 [2024-10-07 11:31:50.612808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.612856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.612880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.432 [2024-10-07 11:31:50.612895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.612946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.612970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.432 [2024-10-07 11:31:50.612997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.613015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.613029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.613046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.432 [2024-10-07 11:31:50.613060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.432 [2024-10-07 11:31:50.613073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.432 [2024-10-07 11:31:50.613102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.432 [2024-10-07 11:31:50.613119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.432 [2024-10-07 11:31:50.623264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.623312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.432 [2024-10-07 11:31:50.623423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.432 [2024-10-07 11:31:50.623454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.432 [2024-10-07 11:31:50.623471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.432 [2024-10-07 11:31:50.623519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.623542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.623558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.623591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.623614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.623641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.623659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.623673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.623689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.623703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.623716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.623747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.433 [2024-10-07 11:31:50.623764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.433 [2024-10-07 11:31:50.633622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.633671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.633786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.633817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.633834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.633881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.633904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.433 [2024-10-07 11:31:50.633920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.634173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.634203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.634375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.634403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.634418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.634436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.634450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.634462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.634568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.433 [2024-10-07 11:31:50.634589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.433 [2024-10-07 11:31:50.643771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.643820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.643912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.643942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.433 [2024-10-07 11:31:50.643959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.644008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.644030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.644046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.644077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.644100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.644136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.644154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.644170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.644186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.644217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.644232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.644978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.433 [2024-10-07 11:31:50.645005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.433 [2024-10-07 11:31:50.654621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.654660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.654750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.654780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.654798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.654846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.654870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.433 [2024-10-07 11:31:50.654886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.654918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.654941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.654968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.654986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.655001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.655017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.655031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.655044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.655073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.433 [2024-10-07 11:31:50.655090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.433 [2024-10-07 11:31:50.665392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.665444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.665539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.665570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.433 [2024-10-07 11:31:50.665588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.665637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.665661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.665687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.665718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.665761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.665790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.665809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.665823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.665839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.665853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.665867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.433 [2024-10-07 11:31:50.665897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.433 [2024-10-07 11:31:50.665914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.433 [2024-10-07 11:31:50.675888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.675945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.433 [2024-10-07 11:31:50.676038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.676070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.433 [2024-10-07 11:31:50.676087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.676135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.433 [2024-10-07 11:31:50.676159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.433 [2024-10-07 11:31:50.676174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.433 [2024-10-07 11:31:50.676444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.676476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.433 [2024-10-07 11:31:50.676619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.433 [2024-10-07 11:31:50.676645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.433 [2024-10-07 11:31:50.676660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.676676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.676690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.676704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.676809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.676829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.686039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.686116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.686229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.686304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.434 [2024-10-07 11:31:50.686341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.686397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.686421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.434 [2024-10-07 11:31:50.686442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.686477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.686502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.687237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.687277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.687296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.687314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.687346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.687360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.687539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.687564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.697008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.697057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.697157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.697188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.434 [2024-10-07 11:31:50.697205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.697255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.697278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.434 [2024-10-07 11:31:50.697294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.697343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.697370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.697398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.697416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.697430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.697447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.697462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.697493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.697526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.697543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.707595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.707645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.707738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.707769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.434 [2024-10-07 11:31:50.707786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.707834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.707864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.434 [2024-10-07 11:31:50.707879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.707911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.707934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.707961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.707978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.707993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.708008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.708022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.708036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.708066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.708082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.717985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.718035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.718128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.718158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.434 [2024-10-07 11:31:50.718175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.718223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.718246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.434 [2024-10-07 11:31:50.718261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.718562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.718596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.718748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.718774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.718789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.718807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.718821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.718834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.718940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.718961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.728169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.728250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.728371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.728404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.434 [2024-10-07 11:31:50.728421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.728470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.434 [2024-10-07 11:31:50.728494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.434 [2024-10-07 11:31:50.728510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.434 [2024-10-07 11:31:50.728543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.728568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.434 [2024-10-07 11:31:50.728595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.728613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.728629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.728645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.434 [2024-10-07 11:31:50.728660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.434 [2024-10-07 11:31:50.728675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.434 [2024-10-07 11:31:50.729440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.434 [2024-10-07 11:31:50.729468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.434 [2024-10-07 11:31:50.739003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.739048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.434 [2024-10-07 11:31:50.739140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.739170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.739214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.739270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.739294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.739310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.739364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.739388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.739416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.739434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.739449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.739465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.739479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.739493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.739522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.435 [2024-10-07 11:31:50.739539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.435 [2024-10-07 11:31:50.749629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.749679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.749773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.749804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.749821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.749870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.749893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.749909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.749941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.749964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.749991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.750009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.750023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.750039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.750054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.750067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.750112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.435 [2024-10-07 11:31:50.750131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.435 [2024-10-07 11:31:50.759978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.760031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.760126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.760157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.760175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.760225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.760248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.760264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.760535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.760567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.760704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.760729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.760744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.760761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.760775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.760788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.760893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.435 [2024-10-07 11:31:50.760913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.435 [2024-10-07 11:31:50.770111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.770162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.770254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.770296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.770331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.770387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.770411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.770427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.770460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.770484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.770511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.770554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.770569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.770587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.770601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.770614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.771356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.435 [2024-10-07 11:31:50.771383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.435 [2024-10-07 11:31:50.780774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.780824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.780917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.780948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.780966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.781016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.781039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.781054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.781086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.781109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.781136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.781154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.781168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.781184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.435 [2024-10-07 11:31:50.781198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.435 [2024-10-07 11:31:50.781211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.435 [2024-10-07 11:31:50.781241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.435 [2024-10-07 11:31:50.781258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.435 [2024-10-07 11:31:50.791356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.791404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.435 [2024-10-07 11:31:50.791497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.791528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.435 [2024-10-07 11:31:50.791545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.791604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.435 [2024-10-07 11:31:50.791635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.435 [2024-10-07 11:31:50.791652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.435 [2024-10-07 11:31:50.791684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.791708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.435 [2024-10-07 11:31:50.791735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.791752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.791767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.791783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.791797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.791810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.791839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.791857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.801774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.801826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.801919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.801950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.436 [2024-10-07 11:31:50.801967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.802016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.802039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.436 [2024-10-07 11:31:50.802055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.802334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.802367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.802504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.802530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.802546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.802563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.802578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.802591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.802696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.802737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.811900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.811974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.812054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.812083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.436 [2024-10-07 11:31:50.812100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.812164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.812192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.436 [2024-10-07 11:31:50.812208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.812226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.812974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.813017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.813035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.813049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.813221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.813246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.813261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.813275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.813379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.822565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.822619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.822711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.822742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.436 [2024-10-07 11:31:50.822760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.822808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.822831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.436 [2024-10-07 11:31:50.822847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.822881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.822905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.822932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.822950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.822983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.823001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.823015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.823029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.823060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.823077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.833134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.833175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.833265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.833296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.436 [2024-10-07 11:31:50.833313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.833381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.833405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.436 [2024-10-07 11:31:50.833421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.833454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.833478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.833505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.833523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.833538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.833554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.833568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.833583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.833613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.833630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.843593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.843646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.436 [2024-10-07 11:31:50.843739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.843770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.436 [2024-10-07 11:31:50.843788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.843835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.436 [2024-10-07 11:31:50.843858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.436 [2024-10-07 11:31:50.843891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.436 [2024-10-07 11:31:50.844147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.844178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.436 [2024-10-07 11:31:50.844337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.844365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.844380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.844397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.436 [2024-10-07 11:31:50.844411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.436 [2024-10-07 11:31:50.844424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.436 [2024-10-07 11:31:50.844531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.436 [2024-10-07 11:31:50.844551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.436 [2024-10-07 11:31:50.853719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.853793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.853873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.853902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.437 [2024-10-07 11:31:50.853919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.853983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.854010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.437 [2024-10-07 11:31:50.854026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.854045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.854077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.854097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.854112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.854125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.854878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.437 [2024-10-07 11:31:50.854907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.854922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.854936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.855106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.437 [2024-10-07 11:31:50.864399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.864466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.864562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.864594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.437 [2024-10-07 11:31:50.864611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.864660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.864683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.437 [2024-10-07 11:31:50.864699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.864731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.864754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.864781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.864799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.864813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.864830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.864844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.864857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.864887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.437 [2024-10-07 11:31:50.864904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.437 [2024-10-07 11:31:50.875078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.875128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.875220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.875251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.437 [2024-10-07 11:31:50.875268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.875330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.875356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.437 [2024-10-07 11:31:50.875372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.875405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.875429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.875456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.875474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.875488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.875520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.875536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.875549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.875581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.437 [2024-10-07 11:31:50.875599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.437 [2024-10-07 11:31:50.885464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.885514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.885606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.885637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.437 [2024-10-07 11:31:50.885654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.885701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.885724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.437 [2024-10-07 11:31:50.885740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.885992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.886023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.886174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.886199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.886214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.886231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.886246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.886259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.886391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.437 [2024-10-07 11:31:50.886415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.437 [2024-10-07 11:31:50.895635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.895683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.437 [2024-10-07 11:31:50.895775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.895806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.437 [2024-10-07 11:31:50.895824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.895872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.437 [2024-10-07 11:31:50.895895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.437 [2024-10-07 11:31:50.895910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.437 [2024-10-07 11:31:50.895960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.895983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.437 [2024-10-07 11:31:50.896010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.437 [2024-10-07 11:31:50.896028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.437 [2024-10-07 11:31:50.896042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.437 [2024-10-07 11:31:50.896059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.896073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.896086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.896827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.896855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.906398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.906447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.906540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.906571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.906588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.906635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.906659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.438 [2024-10-07 11:31:50.906674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.906706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.906729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.906756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.906774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.906788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.906804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.906818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.906831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.906860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.906878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.916987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.917038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.917159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.917190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.438 [2024-10-07 11:31:50.917208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.917256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.917279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.917295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.917341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.917367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.917394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.917412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.917427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.917444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.917458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.917471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.917501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.917518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.927504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.927554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.927647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.927678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.927694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.927742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.927766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.438 [2024-10-07 11:31:50.927781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.928036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.928068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.928202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.928227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.928242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.928259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.928290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.928305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.928429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.928451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.937627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.937700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.937779] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.937808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.438 [2024-10-07 11:31:50.937824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.937888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.937915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.937931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.937949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.938702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.938745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.938763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.938777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.938949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.938975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.938989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.939003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.939111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.948214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.948263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.948371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.948402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.438 [2024-10-07 11:31:50.948420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.948468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.948490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.948506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.438 [2024-10-07 11:31:50.948538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.948580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.438 [2024-10-07 11:31:50.948609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.948628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.948642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.948658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.438 [2024-10-07 11:31:50.948672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.438 [2024-10-07 11:31:50.948685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.438 [2024-10-07 11:31:50.948716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.438 [2024-10-07 11:31:50.948739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.438 [2024-10-07 11:31:50.958752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.958802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.438 [2024-10-07 11:31:50.958895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.438 [2024-10-07 11:31:50.958925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.438 [2024-10-07 11:31:50.958943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.958990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.959013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:50.959029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.959060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.959084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.959110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.959128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.959142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.959158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.959172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.959185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.959215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.439 [2024-10-07 11:31:50.959232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.439 [2024-10-07 11:31:50.969058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.969109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.969217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.969264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:50.969282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.969369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.969396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.439 [2024-10-07 11:31:50.969411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.969665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.969697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.969834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.969860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.969875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.969891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.969905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.969918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.970024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.439 [2024-10-07 11:31:50.970045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.439 [2024-10-07 11:31:50.979181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.979254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.979345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.979375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.439 [2024-10-07 11:31:50.979392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.979458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.979485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:50.979501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.979519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.979550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.979572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.979586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.979599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.980338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.439 [2024-10-07 11:31:50.980365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.980398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.980413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.980584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.439 [2024-10-07 11:31:50.989794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.989845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:50.989937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.989967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.439 [2024-10-07 11:31:50.989985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.990033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:50.990056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:50.990072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:50.990104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.990126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:50.990153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.990171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.990185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.990202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:50.990217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:50.990230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:50.990259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.439 [2024-10-07 11:31:50.990277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.439 [2024-10-07 11:31:51.000247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:51.000297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:51.000403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:51.000435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:51.000452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:51.000499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:51.000522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.439 [2024-10-07 11:31:51.000538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:51.000569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:51.000592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:51.000640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:51.000659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:51.000673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:51.000690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:51.000704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:51.000717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.439 [2024-10-07 11:31:51.000748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.439 [2024-10-07 11:31:51.000765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.439 [2024-10-07 11:31:51.010502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:51.010552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.439 [2024-10-07 11:31:51.010644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:51.010675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.439 [2024-10-07 11:31:51.010692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:51.010740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.439 [2024-10-07 11:31:51.010762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.439 [2024-10-07 11:31:51.010778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.439 [2024-10-07 11:31:51.011030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:51.011061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.439 [2024-10-07 11:31:51.011196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.439 [2024-10-07 11:31:51.011222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.439 [2024-10-07 11:31:51.011237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.011254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.011269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.011282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.011401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.011424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.020623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.020698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.020778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.020806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.440 [2024-10-07 11:31:51.020842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.020913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.020940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.440 [2024-10-07 11:31:51.020956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.020974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.021719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.021762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.021780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.021794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.021966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.021991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.022005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.022019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.022109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.031117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.031168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.031260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.031291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.440 [2024-10-07 11:31:51.031308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.031373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.031397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.440 [2024-10-07 11:31:51.031412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.031444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.031467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.031494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.031511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.031526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.031542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.031556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.031569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.031614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.031633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.041574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.041623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.041715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.041745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.440 [2024-10-07 11:31:51.041762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.041810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.041833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.440 [2024-10-07 11:31:51.041848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.041880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.041903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.041930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.041948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.041964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.041980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.041994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.042007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.042037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.042054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.051800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.051850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.051942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.051973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.440 [2024-10-07 11:31:51.051989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.052037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.052061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.440 [2024-10-07 11:31:51.052076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.052342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.052375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.052512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.052554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.052569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.052587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.052602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.052615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.052721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.052742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.061921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.062010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.062088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.062117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.440 [2024-10-07 11:31:51.062134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.062198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.440 [2024-10-07 11:31:51.062225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.440 [2024-10-07 11:31:51.062241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.440 [2024-10-07 11:31:51.062260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.063018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.440 [2024-10-07 11:31:51.063063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.063082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.063097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.063287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.440 [2024-10-07 11:31:51.063313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.440 [2024-10-07 11:31:51.063344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.440 [2024-10-07 11:31:51.063359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.440 [2024-10-07 11:31:51.063450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.440 [2024-10-07 11:31:51.072429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.072479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.440 [2024-10-07 11:31:51.072573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.072604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.441 [2024-10-07 11:31:51.072622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.072694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.072719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.441 [2024-10-07 11:31:51.072735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.072769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.072792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.072819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.072837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.441 [2024-10-07 11:31:51.072852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.441 [2024-10-07 11:31:51.072869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.072883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.441 [2024-10-07 11:31:51.072897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.441 [2024-10-07 11:31:51.072927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.441 [2024-10-07 11:31:51.072944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.441 [2024-10-07 11:31:51.082916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.441 [2024-10-07 11:31:51.082966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.441 [2024-10-07 11:31:51.083058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.083088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.441 [2024-10-07 11:31:51.083106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.083154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.083177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.441 [2024-10-07 11:31:51.083192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.083224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.083248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.083274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.083292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.441 [2024-10-07 11:31:51.083306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.441 [2024-10-07 11:31:51.083337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.083354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.441 [2024-10-07 11:31:51.083369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.441 [2024-10-07 11:31:51.083400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.441 [2024-10-07 11:31:51.083432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.441 [2024-10-07 11:31:51.093201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.441 [2024-10-07 11:31:51.093251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.441 [2024-10-07 11:31:51.093356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.093388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.441 [2024-10-07 11:31:51.093406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.093454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.441 [2024-10-07 11:31:51.093477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.441 [2024-10-07 11:31:51.093493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.441 [2024-10-07 11:31:51.093746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.093777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.441 [2024-10-07 11:31:51.093913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.093939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.441 [2024-10-07 11:31:51.093955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.441 [2024-10-07 11:31:51.093972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.441 [2024-10-07 11:31:51.093987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.094000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.094105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.442 [2024-10-07 11:31:51.094125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.442 [2024-10-07 11:31:51.103343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.103415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.103494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.103522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.442 [2024-10-07 11:31:51.103538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.103602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.103629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.442 [2024-10-07 11:31:51.103645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.103665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.104414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.104455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.104472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.104508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.104682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.442 [2024-10-07 11:31:51.104707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.104721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.104736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.104846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.442 [2024-10-07 11:31:51.113961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.114013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.114108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.114138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.442 [2024-10-07 11:31:51.114155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.114207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.114230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.442 [2024-10-07 11:31:51.114246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.114278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.114327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.114360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.114378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.114393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.114409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.114423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.114436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.114466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.442 [2024-10-07 11:31:51.114483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.442 [2024-10-07 11:31:51.124486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.124537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.124631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.124661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.442 [2024-10-07 11:31:51.124678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.124727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.124750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.442 [2024-10-07 11:31:51.124786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.124820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.124844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.442 [2024-10-07 11:31:51.124871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.124889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.124903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.124920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.442 [2024-10-07 11:31:51.124934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.442 [2024-10-07 11:31:51.124947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.442 [2024-10-07 11:31:51.124977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.442 [2024-10-07 11:31:51.124996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.442 [2024-10-07 11:31:51.135593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.135653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.442 [2024-10-07 11:31:51.135762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.135795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.442 [2024-10-07 11:31:51.135813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.135862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.442 [2024-10-07 11:31:51.135895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.442 [2024-10-07 11:31:51.135911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.442 [2024-10-07 11:31:51.135944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.135967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.135995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.136013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.136027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.136044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.136058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.136071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.136101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.443 [2024-10-07 11:31:51.136119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.443 [2024-10-07 11:31:51.146477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.146570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.146706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.146749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.443 [2024-10-07 11:31:51.146776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.146859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.146893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.443 [2024-10-07 11:31:51.146919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.148543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.148613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.149828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.149890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.149919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.149946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.149970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.149991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.151804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.443 [2024-10-07 11:31:51.151855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.443 [2024-10-07 11:31:51.158617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.158670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.158844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.158878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.443 [2024-10-07 11:31:51.158896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.158944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.158968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.443 [2024-10-07 11:31:51.158984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.159017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.159041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.159068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.159086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.159101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.159136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.159153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.159167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.159909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.443 [2024-10-07 11:31:51.159948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.443 [2024-10-07 11:31:51.169375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.169425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.169522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.169552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.443 [2024-10-07 11:31:51.169570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.169617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.169640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.443 [2024-10-07 11:31:51.169656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.169688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.169711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.169738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.169756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.169770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.169787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.169801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.169814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.169843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.443 [2024-10-07 11:31:51.169860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.443 [2024-10-07 11:31:51.179998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.180050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.180145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.180175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.443 [2024-10-07 11:31:51.180193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.180240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.180263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.443 [2024-10-07 11:31:51.180295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.180348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.180375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.180407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.180425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.180439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.180455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.180469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.180483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.180513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.443 [2024-10-07 11:31:51.180530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.443 [2024-10-07 11:31:51.190348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.190399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.443 [2024-10-07 11:31:51.190504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.190536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.443 [2024-10-07 11:31:51.190554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.190611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.443 [2024-10-07 11:31:51.190634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.443 [2024-10-07 11:31:51.190650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.443 [2024-10-07 11:31:51.190903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.190935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.443 [2024-10-07 11:31:51.191072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.191097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.191112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.191130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.443 [2024-10-07 11:31:51.191144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.443 [2024-10-07 11:31:51.191157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.443 [2024-10-07 11:31:51.191263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.191283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.200479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.200542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.200665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.200696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.200713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.200764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.200787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.444 [2024-10-07 11:31:51.200803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.200835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.200858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.200885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.200903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.200917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.200933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.200947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.200960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.201702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.201729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.211221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.211272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.211380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.211412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.444 [2024-10-07 11:31:51.211430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.211478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.211502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.211517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.211549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.211573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.211600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.211618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.211632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.211648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.211678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.211692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.211724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.211742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.221783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.221833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.221929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.221960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.221978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.222026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.222049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.444 [2024-10-07 11:31:51.222065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.222097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.222120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.222147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.222165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.222179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.222196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.222210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.222223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.222253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.222270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.232099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.232150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.232243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.232274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.444 [2024-10-07 11:31:51.232291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.232357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.232382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.232399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.232672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.232705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.232842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.232867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.232882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.232900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.232914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.232927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.233032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.233052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.242225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.242309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.242404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.242433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.242450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.242516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.242543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.444 [2024-10-07 11:31:51.242560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.242586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.242620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.444 [2024-10-07 11:31:51.242641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.242655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.242668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.243405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.444 [2024-10-07 11:31:51.243434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.444 [2024-10-07 11:31:51.243449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.444 [2024-10-07 11:31:51.243463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.444 [2024-10-07 11:31:51.243651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.444 [2024-10-07 11:31:51.252950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.253000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.444 [2024-10-07 11:31:51.253093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.444 [2024-10-07 11:31:51.253141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.444 [2024-10-07 11:31:51.253160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.444 [2024-10-07 11:31:51.253210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.253233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.253248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.253281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.253304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.253348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.253368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.253382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.253399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.253413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.253426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.253456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.445 [2024-10-07 11:31:51.253473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.445 [2024-10-07 11:31:51.263600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.263675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.263788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.263828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.263846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.263895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.263919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.445 [2024-10-07 11:31:51.263935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.263969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.263993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.264021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.264039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.264054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.264072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.264086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.264120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.264171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.445 [2024-10-07 11:31:51.264193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.445 [2024-10-07 11:31:51.274007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.274057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.274151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.274182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.445 [2024-10-07 11:31:51.274200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.274249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.274271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.274300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.274578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.274610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.274746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.274772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.274788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.274804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.274819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.274832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.274938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.445 [2024-10-07 11:31:51.274958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.445 [2024-10-07 11:31:51.284134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.284210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.284290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.284334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.284354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.284421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.284448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.445 [2024-10-07 11:31:51.284464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.284483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.285230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.285272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.285290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.285304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.285489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.445 [2024-10-07 11:31:51.285516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.285531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.285544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.285653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.445 [2024-10-07 11:31:51.294758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.294809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.294903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.294935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.294952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.294999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.295022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.445 [2024-10-07 11:31:51.295037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.295069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.295092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.295119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.295137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.295151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.295168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.295182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.295195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.295225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.445 [2024-10-07 11:31:51.295242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.445 [2024-10-07 11:31:51.305289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.305354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.445 [2024-10-07 11:31:51.305449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.305480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.445 [2024-10-07 11:31:51.305515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.305569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.445 [2024-10-07 11:31:51.305593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.445 [2024-10-07 11:31:51.305608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.445 [2024-10-07 11:31:51.305640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.305664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.445 [2024-10-07 11:31:51.305693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.445 [2024-10-07 11:31:51.305711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.445 [2024-10-07 11:31:51.305725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.445 [2024-10-07 11:31:51.305741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.305755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.305768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.305798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.446 [2024-10-07 11:31:51.305815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.446 [2024-10-07 11:31:51.315618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.315669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.315999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.316032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.446 [2024-10-07 11:31:51.316050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.316098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.316121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.446 [2024-10-07 11:31:51.316137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.316273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.316303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.316425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.316448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.316462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.316479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.316494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.316507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.316562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.446 [2024-10-07 11:31:51.316582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.446 [2024-10-07 11:31:51.325741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.325817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.325897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.325926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.446 [2024-10-07 11:31:51.325942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.326005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.326032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.446 [2024-10-07 11:31:51.326048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.326066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.326820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.326863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.326880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.326894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.327066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.446 [2024-10-07 11:31:51.327092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.327107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.327120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.327210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.446 [2024-10-07 11:31:51.336267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.336330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.336427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.336458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.446 [2024-10-07 11:31:51.336475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.336524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.336547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.446 [2024-10-07 11:31:51.336562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.336594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.336617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.336668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.336688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.336702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.336718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.336732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.336745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.336775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.446 [2024-10-07 11:31:51.336792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.446 [2024-10-07 11:31:51.346803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.346853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.346946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.346977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.446 [2024-10-07 11:31:51.346994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.347041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.347064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.446 [2024-10-07 11:31:51.347079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.347110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.347133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.347160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.347177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.347192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.347208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.446 [2024-10-07 11:31:51.347222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.446 [2024-10-07 11:31:51.347235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.446 [2024-10-07 11:31:51.347264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.446 [2024-10-07 11:31:51.347281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.446 [2024-10-07 11:31:51.357073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.357123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.446 [2024-10-07 11:31:51.357232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.357262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.446 [2024-10-07 11:31:51.357279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.357363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.446 [2024-10-07 11:31:51.357390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.446 [2024-10-07 11:31:51.357406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.446 [2024-10-07 11:31:51.357660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.446 [2024-10-07 11:31:51.357692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.357828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.357854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.357869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.357886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.357900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.357914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.358019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.358040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.447 [2024-10-07 11:31:51.367225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.367315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.367408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.367436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.447 [2024-10-07 11:31:51.367453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.367519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.367546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.447 [2024-10-07 11:31:51.367562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.367580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.368306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.368360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.368378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.368392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.368565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.368591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.368605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.368619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.368729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.447 [2024-10-07 11:31:51.377851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.377912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.378017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.378049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.447 [2024-10-07 11:31:51.378066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.378115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.378138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.447 [2024-10-07 11:31:51.378154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.378187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.378211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.378239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.378257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.378271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.378299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.378337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.378353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.378386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.378404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.447 [2024-10-07 11:31:51.388485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.388537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.388633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.388664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.447 [2024-10-07 11:31:51.388681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.388729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.388752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.447 [2024-10-07 11:31:51.388768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.388801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.388825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.388851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.388870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.388907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.388925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.388940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.388953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.388984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.389002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.447 [2024-10-07 11:31:51.398877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.398928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.399026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.399057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.447 [2024-10-07 11:31:51.399075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.399123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.399146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.447 [2024-10-07 11:31:51.399162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.399430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.399462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.399598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.399623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.399638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.399655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.399670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.399683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.399788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.399808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.447 [2024-10-07 11:31:51.409003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.409078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.447 [2024-10-07 11:31:51.409157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.409186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.447 [2024-10-07 11:31:51.409202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.409266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.447 [2024-10-07 11:31:51.409311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.447 [2024-10-07 11:31:51.409348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.447 [2024-10-07 11:31:51.409368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.410096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.447 [2024-10-07 11:31:51.410138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.410156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.447 [2024-10-07 11:31:51.410170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.447 [2024-10-07 11:31:51.410385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.447 [2024-10-07 11:31:51.410414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.447 [2024-10-07 11:31:51.410428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.410442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.410533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.419589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.419640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.419735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.419766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.419783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.419831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.419854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.448 [2024-10-07 11:31:51.419869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.419901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.419924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.419952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.419969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.419984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.420000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.420014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.420029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.420059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.448 [2024-10-07 11:31:51.420076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.430107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.430158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.430253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.430298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.448 [2024-10-07 11:31:51.430332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.430387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.430412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.430428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.430461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.430485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.430511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.430529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.430543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.430559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.430573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.430586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.430615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.448 [2024-10-07 11:31:51.430632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.440463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.440514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.440827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.440870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.440889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.440940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.440963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.448 [2024-10-07 11:31:51.440979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.441117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.441146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.441249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.441270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.441303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.441338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.441357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.441370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.441411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.448 [2024-10-07 11:31:51.441430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.450588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.450661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.450740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.450769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.448 [2024-10-07 11:31:51.450785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.450849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.450876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.450893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.450913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.451655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.451697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.451715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.451729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.451901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.448 [2024-10-07 11:31:51.451926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.451941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.451955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.452063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.461118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.461167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.461260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.461291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.448 [2024-10-07 11:31:51.461308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.461376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.461400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.461434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.461469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.461492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.448 [2024-10-07 11:31:51.461520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.461538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.461552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.461569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.448 [2024-10-07 11:31:51.461583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.448 [2024-10-07 11:31:51.461596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.448 [2024-10-07 11:31:51.461625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.448 [2024-10-07 11:31:51.461642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.448 [2024-10-07 11:31:51.471625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.471675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.448 [2024-10-07 11:31:51.471767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.448 [2024-10-07 11:31:51.471798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.448 [2024-10-07 11:31:51.471815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.448 [2024-10-07 11:31:51.471863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.471885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.449 [2024-10-07 11:31:51.471901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.471933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.471956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.471983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.472001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.472015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.472031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.472045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.472058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.472088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.449 [2024-10-07 11:31:51.472105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.449 [2024-10-07 11:31:51.481919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.481968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.482077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.482108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.449 [2024-10-07 11:31:51.482126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.482174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.482197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.449 [2024-10-07 11:31:51.482213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.482509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.482544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.482681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.482706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.482721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.482738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.482753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.482766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.482871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.449 [2024-10-07 11:31:51.482891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.449 [2024-10-07 11:31:51.492062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.492143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.492228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.492257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.449 [2024-10-07 11:31:51.492275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.492355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.492384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.449 [2024-10-07 11:31:51.492400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.492419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.493155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.493196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.493215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.493229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.493419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.449 [2024-10-07 11:31:51.493463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.493480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.493494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.493588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.449 8975.67 IOPS, 35.06 MiB/s [2024-10-07T11:31:52.972Z] [2024-10-07 11:31:51.502163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.502329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.502363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.449 [2024-10-07 11:31:51.502382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.502431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.502472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.502503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.502521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.502535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.502564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.449 [2024-10-07 11:31:51.502622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.502648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.449 [2024-10-07 11:31:51.502664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.502695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.449 [2024-10-07 11:31:51.502727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.449 [2024-10-07 11:31:51.502744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.449 [2024-10-07 11:31:51.502759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.449 [2024-10-07 11:31:51.502791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.449 
00:20:57.449 Latency(us)
00:20:57.449 [2024-10-07T11:31:52.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.449 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:57.449 Verification LBA range: start 0x0 length 0x4000
00:20:57.449 NVMe0n1 : 15.01 8976.11 35.06 0.00 0.00 14227.40 1459.67 17992.61
00:20:57.449 [2024-10-07T11:31:52.972Z] ===================================================================================================================
00:20:57.449 [2024-10-07T11:31:52.972Z] Total : 8976.11 35.06 0.00 0.00 14227.40 1459.67 17992.61
00:20:57.449 [2024-10-07 11:31:51.512243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.449 [2024-10-07 11:31:51.512379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.449 [2024-10-07 11:31:51.512412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.449 [2024-10-07 11:31:51.512453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.449 [2024-10-07 11:31:51.512495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.449 [2024-10-07 11:31:51.512520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.449 [2024-10-07 11:31:51.512536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.449 [2024-10-07 11:31:51.512550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.449 [2024-10-07 11:31:51.512574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.449 [2024-10-07 11:31:51.512594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.449 [2024-10-07 11:31:51.512662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.449 [2024-10-07 11:31:51.512689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.449 [2024-10-07 11:31:51.512706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.449 [2024-10-07 11:31:51.512726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.449 [2024-10-07 11:31:51.512752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.449 [2024-10-07 11:31:51.512766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.449 [2024-10-07 11:31:51.512780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.449 [2024-10-07 11:31:51.512797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.449 [2024-10-07 11:31:51.522332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.449 [2024-10-07 11:31:51.522423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.449 [2024-10-07 11:31:51.522452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.449 [2024-10-07 11:31:51.522469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.449 [2024-10-07 11:31:51.522489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.522508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.522523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.522536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.522553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.450 [2024-10-07 11:31:51.522632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.522695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.522721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.450 [2024-10-07 11:31:51.522737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.522756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.522775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.522803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.522818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.522835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.450 [2024-10-07 11:31:51.532392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.532482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.532510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.450 [2024-10-07 11:31:51.532527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.532547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.532566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.532581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.532595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.532611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.450 [2024-10-07 11:31:51.532670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.532734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.532760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.450 [2024-10-07 11:31:51.532776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.532795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.532814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.532828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.532841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.532858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.450 [2024-10-07 11:31:51.542450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.542537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.542565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.450 [2024-10-07 11:31:51.542581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.542601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.542620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.542635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.542648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.542665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.450 [2024-10-07 11:31:51.542706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.542784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.542810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.450 [2024-10-07 11:31:51.542826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.542845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.542864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.542879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.542892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.542909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.450 [2024-10-07 11:31:51.552507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.552593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.552621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.450 [2024-10-07 11:31:51.552637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.552657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.552676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.552690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.552704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.552720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.450 [2024-10-07 11:31:51.552755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.450 [2024-10-07 11:31:51.552819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.450 [2024-10-07 11:31:51.552845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.450 [2024-10-07 11:31:51.552861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.450 [2024-10-07 11:31:51.552880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.450 [2024-10-07 11:31:51.552899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.450 [2024-10-07 11:31:51.552913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.450 [2024-10-07 11:31:51.552926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.450 [2024-10-07 11:31:51.552942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.450 Received shutdown signal, test time was about 15.000000 seconds
00:20:57.450 
00:20:57.450 Latency(us)
00:20:57.450 [2024-10-07T11:31:52.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.450 [2024-10-07T11:31:52.973Z] ===================================================================================================================
00:20:57.450 [2024-10-07T11:31:52.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=1
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # false
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # trap - ERR
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # print_backtrace
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]]
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp')
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable
00:20:57.450 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:57.450 ========== Backtrace start: ==========
00:20:57.450 
00:20:57.450 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh:68 -> main(["--transport=tcp"])
00:20:57.450 ...
00:20:57.450    63  cat $testdir/try.txt
00:20:57.450    64  # if this test fails it means we didn't fail over to the second
00:20:57.450    65  count="$(grep -c "Resetting controller successful" < $testdir/try.txt)"
00:20:57.450    66  
00:20:57.450    67  if ((count != 3)); then
00:20:57.450 => 68  false
00:20:57.450    69  fi
00:20:57.450    70  
00:20:57.450    71  # Part 2 of the test. Start removing ports, starting with the one we are connected to, confirm that the ctrlr remains active until the final trid is removed.
00:20:57.450    72  $rootdir/build/examples/bdevperf -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 1 -f &> $testdir/try.txt &
00:20:57.450    73  bdevperf_pid=$!
00:20:57.450 ...
00:20:57.451 
00:20:57.451 ========== Backtrace end ==========
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # process_shm --id 0
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@808 -- # type=--id
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@809 -- # id=0
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@820 -- # for n in $shm_files
00:20:57.451 11:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:57.451 nvmf_trace.0
00:20:57.451 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@823 -- # return 0
00:20:57.451 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:57.451 [2024-10-07 11:31:34.593419] Starting SPDK v25.01-pre git sha1 2a4f56c54 / DPDK 24.03.0 initialization...
00:20:57.451 [2024-10-07 11:31:34.593522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75490 ]
00:20:57.451 [2024-10-07 11:31:34.728619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:57.451 [2024-10-07 11:31:34.843771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:20:57.451 [2024-10-07 11:31:34.897824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:57.451 Running I/O for 15 seconds...
00:20:57.451 6820.00 IOPS, 26.64 MiB/s [2024-10-07T11:31:52.974Z] [2024-10-07 11:31:37.651762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.451 [2024-10-07 11:31:37.651867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.651891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.451 [2024-10-07 11:31:37.651906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.651921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.451 [2024-10-07 11:31:37.651943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.651959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.451 [2024-10-07 11:31:37.651973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.651988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.451 [2024-10-07 11:31:37.652066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.451 [2024-10-07 11:31:37.652253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.652747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.652981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.451 [2024-10-07 11:31:37.652995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.653011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.451 [2024-10-07 11:31:37.653026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.451 [2024-10-07 11:31:37.653042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.653255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 
[2024-10-07 11:31:37.653616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.653973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.653994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.654040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.452 [2024-10-07 11:31:37.654297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.654343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.654384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.452 [2024-10-07 11:31:37.654414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.452 [2024-10-07 11:31:37.654430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66624 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.654828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.654867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 
[2024-10-07 11:31:37.654929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.654959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.654989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.655019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.655054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.655084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.453 [2024-10-07 11:31:37.655731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.453 [2024-10-07 11:31:37.655761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.453 [2024-10-07 11:31:37.655776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.655981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.655998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.454 [2024-10-07 11:31:37.656198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 
[2024-10-07 11:31:37.656256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.454 [2024-10-07 11:31:37.656273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.454 [2024-10-07 11:31:37.656285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66296 len:8 PRP1 0x0 PRP2 0x0 00:20:57.454 [2024-10-07 11:31:37.656299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.454 [2024-10-07 11:31:37.656372] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe31770 was disconnected and freed. reset controller. 00:20:57.454 [2024-10-07 11:31:37.657441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.657502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.657847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.657881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.657899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.657951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.657995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.454 [2024-10-07 11:31:37.658028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.454 [2024-10-07 11:31:37.658045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.454 [2024-10-07 11:31:37.658079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.454 [2024-10-07 11:31:37.668379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.668526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.668561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.668580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.668630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.668668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.454 [2024-10-07 11:31:37.668687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.454 [2024-10-07 11:31:37.668702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.454 [2024-10-07 11:31:37.668733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.454 [2024-10-07 11:31:37.678465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.678583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.678615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.678632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.678664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.678697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.454 [2024-10-07 11:31:37.678715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.454 [2024-10-07 11:31:37.678729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.454 [2024-10-07 11:31:37.678758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.454 [2024-10-07 11:31:37.690321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.690455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.690488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.690505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.690538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.690570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.454 [2024-10-07 11:31:37.690588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.454 [2024-10-07 11:31:37.690602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.454 [2024-10-07 11:31:37.690633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.454 [2024-10-07 11:31:37.700465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.700617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.700649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.700667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.700699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.454 [2024-10-07 11:31:37.700732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.454 [2024-10-07 11:31:37.700750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.454 [2024-10-07 11:31:37.700764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.454 [2024-10-07 11:31:37.700794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.454 [2024-10-07 11:31:37.711464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.454 [2024-10-07 11:31:37.711730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.454 [2024-10-07 11:31:37.711791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.454 [2024-10-07 11:31:37.711828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.454 [2024-10-07 11:31:37.713393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.714855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.714919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.714953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.715216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.721584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.721863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.721909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.721929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.722063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.722189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.722220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.722236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.722337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.455 [2024-10-07 11:31:37.732995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.733117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.733149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.733167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.733219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.733252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.733271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.733285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.733330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.743092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.743209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.743241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.743258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.743290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.743339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.743360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.743375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.743406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.455 [2024-10-07 11:31:37.753712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.753837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.753869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.753886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.753918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.753951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.753968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.753983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.754013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.763808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.764093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.764138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.764158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.764292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.764434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.764462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.764492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.764552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.455 [2024-10-07 11:31:37.775120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.775239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.775271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.775288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.775337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.775373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.775391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.775405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.775436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.785215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.785344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.785377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.785394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.785427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.785459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.785476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.785490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.785520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.455 [2024-10-07 11:31:37.795969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.796109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.796140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.796159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.796191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.796222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.796240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.796254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.796285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.806063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.806362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.806411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.806430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.806564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.806700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.806724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.806739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.806795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.455 [2024-10-07 11:31:37.817330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.817449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.817481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.817498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.455 [2024-10-07 11:31:37.817529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.455 [2024-10-07 11:31:37.817561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.455 [2024-10-07 11:31:37.817580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.455 [2024-10-07 11:31:37.817594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.455 [2024-10-07 11:31:37.817624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.455 [2024-10-07 11:31:37.827439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.455 [2024-10-07 11:31:37.827555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.455 [2024-10-07 11:31:37.827586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.455 [2024-10-07 11:31:37.827603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.827635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.827668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.827685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.827699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.827729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.838222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.838369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.838402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.838419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.838454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.838502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.838522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.838536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.838567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.456 [2024-10-07 11:31:37.848311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.848439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.848470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.848488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.848673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.848826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.848852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.848866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.848981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.859672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.859798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.859830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.859847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.859878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.859910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.859928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.859941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.859971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.456 [2024-10-07 11:31:37.869797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.869914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.869945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.869962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.869994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.870026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.870044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.870058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.870104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.880312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.880464] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.880495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.880513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.880545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.880578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.880596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.880610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.880640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.456 [2024-10-07 11:31:37.890418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.890533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.890564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.890581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.890613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.890644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.890662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.890677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.890724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.901804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.901922] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.901954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.901971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.902003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.902035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.902053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.902067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.902097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.456 [2024-10-07 11:31:37.911895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.912010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.912041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.912079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.912113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.912160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.912180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.912195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.913407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.922270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.922426] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.922459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.922476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.922509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.922540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.922558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.922572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.922603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.456 [2024-10-07 11:31:37.932394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.932674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.456 [2024-10-07 11:31:37.932718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.456 [2024-10-07 11:31:37.932737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.456 [2024-10-07 11:31:37.932872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.456 [2024-10-07 11:31:37.932997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.456 [2024-10-07 11:31:37.933022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.456 [2024-10-07 11:31:37.933038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.456 [2024-10-07 11:31:37.933093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.456 [2024-10-07 11:31:37.943415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.456 [2024-10-07 11:31:37.943532] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.943564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.943581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.943613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.943646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.943678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.943693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.943725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:37.953506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:37.953622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.953653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.953671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.954874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.955137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.955174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.955192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.956014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.457 [2024-10-07 11:31:37.963609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:37.963723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.963755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.963772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.963803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.963835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.963852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.963866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.963896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:37.973958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:37.974163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.974196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.974214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.974268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.974334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.974356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.974371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.974402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.457 [2024-10-07 11:31:37.984556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:37.984682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.984714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.984731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.984763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.984795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.984813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.984828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.984857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:37.994661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:37.994778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:37.994810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:37.994827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:37.996031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:37.996278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:37.996327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:37.996347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:37.997151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.457 [2024-10-07 11:31:38.004749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.004863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.004895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:38.004912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:38.004944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:38.004975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:38.004994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:38.005008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:38.005038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:38.014839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.015114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.015157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:38.015177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:38.015345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:38.015484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:38.015518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:38.015535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:38.015591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.457 [2024-10-07 11:31:38.025814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.025933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.025964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:38.025981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:38.026019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:38.026051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:38.026069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:38.026084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:38.026114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:38.035906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.036032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.036064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:38.036081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:38.036113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:38.036145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:38.036163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:38.036177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:38.037374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.457 [2024-10-07 11:31:38.046089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.046202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.046234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.457 [2024-10-07 11:31:38.046250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.457 [2024-10-07 11:31:38.046282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.457 [2024-10-07 11:31:38.046343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.457 [2024-10-07 11:31:38.046364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.457 [2024-10-07 11:31:38.046406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.457 [2024-10-07 11:31:38.046438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.457 [2024-10-07 11:31:38.056180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.457 [2024-10-07 11:31:38.056465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.457 [2024-10-07 11:31:38.056509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.056529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.056662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.056788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.056823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.056839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.056897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.067167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.067283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.067330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.067351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.067391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.067422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.067440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.067454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.067484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.458 [2024-10-07 11:31:38.077261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.077394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.077427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.077445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.077476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.078691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.078731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.078750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.078976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.087398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.087514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.087567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.087586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.087618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.087651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.087669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.087683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.087713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.458 [2024-10-07 11:31:38.097735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.097940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.097974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.097991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.098046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.098083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.098101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.098115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.098146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.108440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.108561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.108593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.108611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.108642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.108675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.108693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.108708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.108738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.458 [2024-10-07 11:31:38.118536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.118651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.118682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.118700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.118732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.119943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.119982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.120000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.120241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.128739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.128856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.128888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.128905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.128937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.128971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.128988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.129002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.129033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.458 [2024-10-07 11:31:38.138832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.139101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.139145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.139165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.139297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.139443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.139469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.139484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.139540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.149795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.149911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.149943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.149960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.149992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.150024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.150041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.150056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.150103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.458 [2024-10-07 11:31:38.159886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.160002] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.160033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.160051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.458 [2024-10-07 11:31:38.160082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.458 [2024-10-07 11:31:38.160114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.458 [2024-10-07 11:31:38.160131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.458 [2024-10-07 11:31:38.160146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.458 [2024-10-07 11:31:38.160176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.458 [2024-10-07 11:31:38.170120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.458 [2024-10-07 11:31:38.170237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.458 [2024-10-07 11:31:38.170269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.458 [2024-10-07 11:31:38.170297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.170345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.170382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.170401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.170415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.170460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.459 [2024-10-07 11:31:38.180215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.180345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.180376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.180393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.180579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.180739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.180772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.180790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.180909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.459 [2024-10-07 11:31:38.191292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.191429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.191461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.191510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.191545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.191577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.191595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.191610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.191640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.459 [2024-10-07 11:31:38.201398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.201518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.201549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.201567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.201599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.201630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.201648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.201662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.201692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.459 [2024-10-07 11:31:38.211686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.211805] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.211836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.211853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.211885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.211917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.211935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.211949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.211978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.459 [2024-10-07 11:31:38.221783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.221911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.221943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.221960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.222155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.222311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.222376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.222396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.222519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.459 [2024-10-07 11:31:38.233067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.233225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.233260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.233279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.233313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.233363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.233382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.233398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.233428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.459 [2024-10-07 11:31:38.243227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.243428] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.243478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.243510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.245213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.245612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.245675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.245708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.246684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.459 [2024-10-07 11:31:38.253772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.253943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.253988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.254015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.254062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.255494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.255549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.255576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.255875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.459 [2024-10-07 11:31:38.264040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.264298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.264351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.264372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.264496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.264562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.264585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.459 [2024-10-07 11:31:38.264600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.459 [2024-10-07 11:31:38.264633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.459 [2024-10-07 11:31:38.274938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.459 [2024-10-07 11:31:38.275061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.459 [2024-10-07 11:31:38.275093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.459 [2024-10-07 11:31:38.275111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.459 [2024-10-07 11:31:38.275143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.459 [2024-10-07 11:31:38.275175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.459 [2024-10-07 11:31:38.275194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.275208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.275238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.285035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.285153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.285185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.285202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.285234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.285266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.285284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.285298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.285344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.295572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.295700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.295732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.295750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.295812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.295848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.295867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.295881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.295912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.305667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.305791] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.305823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.305842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.306032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.306186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.306220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.306236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.306385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.316883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.317001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.317033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.317050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.317082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.317114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.317132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.317146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.317176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.326975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.327092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.327123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.327140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.328339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.328583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.328620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.328652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.329479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.337069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.337187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.337223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.337240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.337272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.337304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.337339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.337355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.337386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.347310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.347540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.347575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.347593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.347713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.347776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.347799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.347814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.347856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.358139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.358258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.358300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.358334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.358370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.358406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.358424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.358438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.358468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.368233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.368368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.368416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.368436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.369640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.369900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.369934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.369956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.370793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.378439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.378556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.378588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.378605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.378636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.378669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.378687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.378712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.378743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.460 [2024-10-07 11:31:38.388742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.388857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.388889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.388906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.388944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.460 [2024-10-07 11:31:38.388976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.460 [2024-10-07 11:31:38.388994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.460 [2024-10-07 11:31:38.389008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.460 [2024-10-07 11:31:38.389037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.460 [2024-10-07 11:31:38.400409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.460 [2024-10-07 11:31:38.400525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.460 [2024-10-07 11:31:38.400557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.460 [2024-10-07 11:31:38.400575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.460 [2024-10-07 11:31:38.400606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.400657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.400677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.400691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.400722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.410500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.410622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.410663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.410680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.410712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.410744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.410762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.410776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.410805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 [2024-10-07 11:31:38.420814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.420931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.420962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.420980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.421011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.421043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.421061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.421075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.421104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.431125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.431243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.431275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.431293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.431339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.431374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.431392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.431406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.431455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 [2024-10-07 11:31:38.442796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.442915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.442947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.442965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.442996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.443028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.443047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.443061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.443091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.452889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.453006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.453039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.453057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.453088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.454276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.454341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.454361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.454595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 [2024-10-07 11:31:38.463070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.463186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.463218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.463235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.463267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.463299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.463330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.463348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.463380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.473505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.473645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.473678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.473711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.473746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.473782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.473799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.473813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.473844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 [2024-10-07 11:31:38.484067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.484187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.484218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.484235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.484267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.484299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.484334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.484352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.484383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.494163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.494278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.494334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.494354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.494399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.495595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.495633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.495651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.495862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 7954.50 IOPS, 31.07 MiB/s [2024-10-07T11:31:52.984Z] [2024-10-07 11:31:38.504336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.504453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.504485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.504503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.504534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.504566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.504600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.504615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.504647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.461 [2024-10-07 11:31:38.514425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.514695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.461 [2024-10-07 11:31:38.514743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.461 [2024-10-07 11:31:38.514762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.461 [2024-10-07 11:31:38.514896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.461 [2024-10-07 11:31:38.515020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.461 [2024-10-07 11:31:38.515051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.461 [2024-10-07 11:31:38.515068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.461 [2024-10-07 11:31:38.515123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.461 [2024-10-07 11:31:38.525361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.461 [2024-10-07 11:31:38.525479] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.525510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.525528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.525559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.525593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.525611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.525625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.525655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.535455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.535570] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.535602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.535620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.536808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.537057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.537094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.537111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.537929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.462 [2024-10-07 11:31:38.545551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.545665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.545697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.545714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.545746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.545777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.545796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.545810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.545840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.555963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.556101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.556133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.556151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.556182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.556214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.556232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.556246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.556275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.462 [2024-10-07 11:31:38.566488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.566603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.566634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.566651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.566683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.566714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.566733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.566747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.566777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.576578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.576693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.576724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.576742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.577947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.578183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.578227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.578244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.579074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.462 [2024-10-07 11:31:38.586681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.586795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.586826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.586847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.586878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.586910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.586929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.586943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.586972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.597015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.597160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.597193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.597211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.597243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.597288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.597305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.597335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.597369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.462 [2024-10-07 11:31:38.607539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.607656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.607687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.607705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.607738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.607770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.607788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.607817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.607849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.617632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.617752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.617783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.617800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.619014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.619251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.619286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.619304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.620127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.462 [2024-10-07 11:31:38.627727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.627842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.627873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.462 [2024-10-07 11:31:38.627890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.462 [2024-10-07 11:31:38.627922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.462 [2024-10-07 11:31:38.627954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.462 [2024-10-07 11:31:38.627972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.462 [2024-10-07 11:31:38.627986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.462 [2024-10-07 11:31:38.628016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.462 [2024-10-07 11:31:38.638106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.462 [2024-10-07 11:31:38.638246] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.462 [2024-10-07 11:31:38.638279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.638311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.638360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.638392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.638410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.638424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.638455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.648636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.648767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.648798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.648815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.648849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.648880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.648898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.648912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.648942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.658739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.658855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.658886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.658904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.660091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.660349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.660381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.660398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.661201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.668830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.668950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.668982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.669000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.669032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.669064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.669082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.669096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.669126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.679070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.679285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.679330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.679351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.679472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.679553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.679578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.679592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.679623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.689872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.689990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.690021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.690038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.690070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.690102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.690120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.690134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.690164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.699965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.700085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.700117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.700134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.701343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.701575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.701623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.701640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.702471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.710061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.710179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.710211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.710229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.710266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.710313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.710348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.710363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.710411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.720384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.720524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.720557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.720575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.720607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.720639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.720657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.720670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.720701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.730887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.731009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.731041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.731058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.731089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.731121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.731139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.731154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.731183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.740981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.742254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.742307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.742341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.742566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.743414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.743449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.743467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.744657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.463 [2024-10-07 11:31:38.751071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.463 [2024-10-07 11:31:38.751184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.463 [2024-10-07 11:31:38.751216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.463 [2024-10-07 11:31:38.751247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.463 [2024-10-07 11:31:38.751281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.463 [2024-10-07 11:31:38.751314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.463 [2024-10-07 11:31:38.751348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.463 [2024-10-07 11:31:38.751362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.463 [2024-10-07 11:31:38.751394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.463 [2024-10-07 11:31:38.761338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.761454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.761485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.761503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.761534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.761566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.761594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.761608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.761638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.771792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.771912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.771943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.771960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.771992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.772025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.772043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.772065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.772096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.781885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.783163] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.783210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.783230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.783457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.784268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.784333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.784353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.785563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.791975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.792091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.792123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.792140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.792183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.792215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.792233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.792247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.792277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.802250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.802418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.802452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.802470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.802502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.802535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.802552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.802566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.802597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.812791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.812911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.812943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.812961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.812993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.813024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.813043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.813057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.813087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.822885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.823012] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.823043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.823061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.824269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.824515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.824551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.824568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.825386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.832990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.833115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.833147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.833164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.833196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.833227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.833246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.833260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.833289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.843092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.843386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.843430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.843449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.843581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.843717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.843751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.843768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.843824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.854066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.854185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.854218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.854235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.854297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.854349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.854369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.854385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.854415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.864156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.864273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.864305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.864338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.864372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.864403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.864421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.864435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.865619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.464 [2024-10-07 11:31:38.874402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.874517] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.874548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.874566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.874598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.874630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.464 [2024-10-07 11:31:38.874648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.464 [2024-10-07 11:31:38.874662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.464 [2024-10-07 11:31:38.874692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.464 [2024-10-07 11:31:38.884493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.464 [2024-10-07 11:31:38.884761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.464 [2024-10-07 11:31:38.884805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.464 [2024-10-07 11:31:38.884824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.464 [2024-10-07 11:31:38.884957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.464 [2024-10-07 11:31:38.885082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.885117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.885149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.885208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:38.895451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.895568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.895600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.895617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.895649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.895681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.895699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.895718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.895748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:38.905716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.905842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.905874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.905892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.905924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.905955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.905973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.905988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.906018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:38.916277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.916417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.916451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.916468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.916502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.916548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.916569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.916585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.916615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:38.926383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.926518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.926550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.926568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.926600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.926632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.926651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.926665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.926695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:38.937766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.937887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.937919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.937936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.937969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.938001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.938019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.938032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.938063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:38.947878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.947995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.948026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.948043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.948075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.948107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.948124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.948138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.948168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:38.958712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.958834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.958867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.958884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.958916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.958963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.958983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.958997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.959027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:38.968801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.968916] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.968948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.968965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.969003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.969035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.969053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.969066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.969096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:38.980151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.980266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.980297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.980327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.980363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.980396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.980413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.980428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.980457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:38.990239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:38.990381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:38.990413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:38.990431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:38.990463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:38.990495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:38.990512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:38.990526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:38.990573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:39.000908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:39.001033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:39.001065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:39.001082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:39.001115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:39.001147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:39.001165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:39.001179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:39.001209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:39.011000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:39.011117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:39.011149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:39.011166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:39.011197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:39.011229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:39.011247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:39.011261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:39.011291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.465 [2024-10-07 11:31:39.022367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:39.022486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.465 [2024-10-07 11:31:39.022518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.465 [2024-10-07 11:31:39.022536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.465 [2024-10-07 11:31:39.022568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.465 [2024-10-07 11:31:39.022600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.465 [2024-10-07 11:31:39.022618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.465 [2024-10-07 11:31:39.022633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.465 [2024-10-07 11:31:39.022662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.465 [2024-10-07 11:31:39.032462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.465 [2024-10-07 11:31:39.032576] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.032608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.032641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.032675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.032708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.032726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.032740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.032770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.043249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.043389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.043422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.043439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.043471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.043504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.043522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.043536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.043567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.053353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.053469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.053499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.053517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.053548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.053580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.053597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.053611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.053641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.064740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.064857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.064888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.064916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.064947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.064980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.065016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.065031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.065063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.074828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.074943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.074975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.074992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.075023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.075055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.075072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.075086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.076273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.085042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.085158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.085190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.085207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.085238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.085270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.085288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.085302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.085347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.095131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.095248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.095279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.095296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.095496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.095639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.095674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.095693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.095812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.106222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.106365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.106398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.106415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.106448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.106480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.106498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.106512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.106543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.116330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.116448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.116487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.116504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.116536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.117724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.117762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.117780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.117984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.126491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.126612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.126644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.126662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.126694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.126726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.126744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.126758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.126788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.136853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.136992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.137025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.137042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.137091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.137124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.137141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.137155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.137186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.466 [2024-10-07 11:31:39.147427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.147544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.147575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.147592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.466 [2024-10-07 11:31:39.147624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.466 [2024-10-07 11:31:39.147655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.466 [2024-10-07 11:31:39.147673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.466 [2024-10-07 11:31:39.147687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.466 [2024-10-07 11:31:39.147717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.466 [2024-10-07 11:31:39.157516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.466 [2024-10-07 11:31:39.157631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.466 [2024-10-07 11:31:39.157662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.466 [2024-10-07 11:31:39.157679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.158878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.159123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.159156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.159173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.159991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.167609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.167724] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.167756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.167773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.167805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.167837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.167855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.167883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.167916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.177964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.178108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.178140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.178158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.178189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.178221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.178238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.178252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.178282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.188573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.188685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.188723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.188740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.188784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.188818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.188837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.188851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.188881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.198662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.198778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.198809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.198827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.198868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.198899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.198918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.198932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.200121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.208854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.208991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.209023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.209040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.209072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.209104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.209121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.209135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.209166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.218968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.219243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.219286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.219306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.219454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.219580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.219606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.219621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.219678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.229945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.230062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.230093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.230110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.230142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.230174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.230192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.230214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.230245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.240040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.240155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.240186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.240204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.240236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.240284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.240312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.240344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.241538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.250327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.250446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.250477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.250494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.250526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.250569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.250587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.250601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.250630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.260423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.260546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.260577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.260595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.260783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.260925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.260950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.260965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.261082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.271699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.271857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.271891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.271909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.271943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.271976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.271995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.272011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.272067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.467 [2024-10-07 11:31:39.281818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.281958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.281991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.282010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.282043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.282089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.282110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.282126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.283355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.467 [2024-10-07 11:31:39.292177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.467 [2024-10-07 11:31:39.292343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.467 [2024-10-07 11:31:39.292378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.467 [2024-10-07 11:31:39.292396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.467 [2024-10-07 11:31:39.292431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.467 [2024-10-07 11:31:39.292464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.467 [2024-10-07 11:31:39.292483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.467 [2024-10-07 11:31:39.292498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.467 [2024-10-07 11:31:39.292529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.302298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.302455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.302489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.302507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.302697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.302840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.302872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.302889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.303007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.313508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.313666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.313700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.313745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.313783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.313816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.313834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.313850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.313881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.323633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.323794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.323828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.323847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.323882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.323915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.323933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.323948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.325162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.334050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.334218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.334252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.334271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.334336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.334375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.334394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.334410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.334441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.344167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.344313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.344359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.344378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.344569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.344711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.344776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.344796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.344917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.355457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.355605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.355638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.355663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.355696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.355729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.355747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.355762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.355794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.365568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.365722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.365756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.365784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.365818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.365851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.365869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.365884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.365915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.376051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.376214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.376248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.376267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.376301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.376367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.376390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.376406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.376437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.386179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.386350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.386385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.386404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.386595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.386737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.386763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.386778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.386895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.398822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.399673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.399721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.399743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.399846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.399886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.399905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.399920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.399953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.410767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.412104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.412151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.412172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.412313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.412380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.412401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.412428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.412460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.420894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.421986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.422035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.422057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.422278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.422395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.422420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.422438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.422472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.468 [2024-10-07 11:31:39.431712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.431900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.431935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.431954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.431989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.432024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.432043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.432060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.432091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.468 [2024-10-07 11:31:39.442532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.468 [2024-10-07 11:31:39.442690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.468 [2024-10-07 11:31:39.442724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.468 [2024-10-07 11:31:39.442742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.468 [2024-10-07 11:31:39.442777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.468 [2024-10-07 11:31:39.442822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.468 [2024-10-07 11:31:39.442843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.468 [2024-10-07 11:31:39.442859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.468 [2024-10-07 11:31:39.442890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.454595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.454775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.454810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.454829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.454864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.454897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.454916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.454954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.454987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.464734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.464891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.464925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.464943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.464976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.465009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.465027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.465043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.466247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.475097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.475275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.475312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.475346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.475384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.475433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.475455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.475471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.475502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.485239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.485436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.485472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.485491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.485690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.485835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.485871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.485890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.486011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 8384.33 IOPS, 32.75 MiB/s [2024-10-07T11:31:52.992Z] [2024-10-07 11:31:39.499808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.500162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.500210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.500232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.500381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.500451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.500474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.500490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.500524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.510560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.510725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.510759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.510778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.510814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.510847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.510866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.510881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.510912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.520683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.520831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.520864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.520882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.520916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.520949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.520967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.520981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.522186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.531071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.531237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.531272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.531290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.531369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.531403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.531422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.531437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.531469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.541191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.541360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.541394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.541413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.541619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.541762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.541799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.541818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.541946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.552540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.552706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.552739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.552758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.552793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.552826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.552845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.552861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.552897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.562680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.562836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.562869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.562887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.562922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.562954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.562973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.563011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.563045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.573483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.573718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.573752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.573771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.573812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.573847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.573866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.573881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.573912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.469 [2024-10-07 11:31:39.583602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.583719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.583750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.469 [2024-10-07 11:31:39.583768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.469 [2024-10-07 11:31:39.583800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.469 [2024-10-07 11:31:39.583846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.469 [2024-10-07 11:31:39.583867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.469 [2024-10-07 11:31:39.583882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.469 [2024-10-07 11:31:39.583912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.469 [2024-10-07 11:31:39.595183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.469 [2024-10-07 11:31:39.595317] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.469 [2024-10-07 11:31:39.595363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.595382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.595415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.595448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.595466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.595481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.595512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.605299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.605433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.605485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.605505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.605537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.606775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.606819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.606837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.607049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.615621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.615757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.615790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.615808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.615841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.615874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.615893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.615908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.615939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.625727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.625864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.625896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.625914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.626103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.626245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.626270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.626298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.626434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.636866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.636988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.637020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.637038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.637071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.637126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.637146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.637160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.637191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.646960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.647077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.647108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.647126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.647158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.647190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.647209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.647231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.647268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.657289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.657431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.657463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.657482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.657514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.657547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.657565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.657579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.657611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.667404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.667549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.667583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.667608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.667641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.667673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.667691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.667707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.667894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.678878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.679037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.679070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.679089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.679128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.679161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.679179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.679195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.679226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.688995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.689115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.689147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.689164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.689196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.689229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.689247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.689261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.689291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.699433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.699561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.699592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.699610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.699642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.699689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.699711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.699726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.699756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.709529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.709646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.709677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.709716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.709905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.710047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.710083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.710100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.710219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.720691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.720808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.720841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.720858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.720891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.720926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.720947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.720961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.720992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.470 [2024-10-07 11:31:39.730782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.730905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.730937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.730954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.730986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.731018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.470 [2024-10-07 11:31:39.731036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.470 [2024-10-07 11:31:39.731052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.470 [2024-10-07 11:31:39.731082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.470 [2024-10-07 11:31:39.741193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.470 [2024-10-07 11:31:39.741341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.470 [2024-10-07 11:31:39.741374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.470 [2024-10-07 11:31:39.741391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.470 [2024-10-07 11:31:39.741424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.470 [2024-10-07 11:31:39.741457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.741493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.741508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.741540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.751310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.751439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.751471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.751488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.751675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.751818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.751853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.751870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.751988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.762576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.762693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.762725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.762743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.762775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.762814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.762833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.762847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.762877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.772670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.772785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.772817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.772834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.772865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.772908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.772925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.772939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.772968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.783254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.783411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.783444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.783462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.783494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.783526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.783543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.783557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.783588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.793433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.793550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.793580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.793597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.793628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.793810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.793836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.793850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.793996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.804906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.805027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.805063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.805080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.805113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.805144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.805163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.805176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.805206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.815015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.815134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.815165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.815182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.815232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.815264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.815282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.815296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.815341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.825364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.825490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.825521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.825538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.825571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.825603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.825621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.825636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.825666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.835453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.835575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.835606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.835624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.835810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.835958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.835993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.836010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.836128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.846890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.847024] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.847055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.847073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.847105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.847137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.847155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.847190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.847224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.856982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.857099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.857130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.857147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.857179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.857211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.857228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.857242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.857272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.867551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.867693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.867724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.867758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.867789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.867820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.867838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.867851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.867881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.471 [2024-10-07 11:31:39.877647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.877764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.877803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.877820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.877851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.877884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.877901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.877915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.878100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.471 [2024-10-07 11:31:39.888949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.471 [2024-10-07 11:31:39.889076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.471 [2024-10-07 11:31:39.889122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.471 [2024-10-07 11:31:39.889141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.471 [2024-10-07 11:31:39.889173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.471 [2024-10-07 11:31:39.889206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.471 [2024-10-07 11:31:39.889223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.471 [2024-10-07 11:31:39.889238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.471 [2024-10-07 11:31:39.889268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:39.899043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.899162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.899193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.899210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.899242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.899274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.899292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.899307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.899353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:39.909405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.909530] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.909562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.909579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.909612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.909643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.909661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.909676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.909705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:39.919498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.919768] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.919811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.919831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.919963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.920106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.920132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.920147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.920203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:39.930523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.930640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.930672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.930689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.930721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.930752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.930770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.930785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.930815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:39.940613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.940730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.940761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.940778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.940810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.942012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.942051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.942069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.942280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:39.950911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.951028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.951060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.951077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.951109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.951141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.951160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.951174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.951204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:39.961031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.961446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.961508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.961541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.961722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.961911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.961962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.962008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.962095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:39.972210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.972349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.972383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.972401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.972434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.972467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.972485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.972499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.972530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:39.982332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.982460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.982492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.982509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.983702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.983934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.983981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.983999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.984819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:39.992472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:39.992589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:39.992620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:39.992655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:39.992689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:39.992721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:39.992740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:39.992754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:39.992785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:40.002562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:40.002858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:40.002903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:40.002923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:40.003055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:40.003181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:40.003216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:40.003233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:40.003289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:40.013629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:40.013748] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:40.013780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:40.013798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:40.013830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:40.013862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:40.013881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:40.013896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:40.013926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:40.023719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:40.023837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:40.023868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:40.023886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:40.023918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:40.023949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:40.023986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:40.024001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:40.024032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.472 [2024-10-07 11:31:40.034337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:40.034470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:40.034503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:40.034520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:40.034553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:40.034585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:40.034604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:40.034618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.472 [2024-10-07 11:31:40.034648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.472 [2024-10-07 11:31:40.044429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.472 [2024-10-07 11:31:40.044549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.472 [2024-10-07 11:31:40.044579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.472 [2024-10-07 11:31:40.044597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.472 [2024-10-07 11:31:40.044783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.472 [2024-10-07 11:31:40.044925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.472 [2024-10-07 11:31:40.044970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.472 [2024-10-07 11:31:40.044997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.045119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.055704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.055833] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.055866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.055883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.055916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.055948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.055967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.055981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.056012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.065808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.065951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.065984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.066002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.066034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.066067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.066085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.066099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.066130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.076645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.076771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.076803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.076820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.076853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.076885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.076903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.076917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.076947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.086738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.086852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.086884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.086902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.086935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.086967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.086985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.086999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.087197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.098398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.098525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.098557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.098575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.098630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.098663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.098682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.098697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.098727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.108566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.108683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.108714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.108732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.108764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.108796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.108814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.108827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.108857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.119243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.119392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.119425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.119442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.119475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.119508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.119526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.119540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.119571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.129351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.129470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.129502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.129520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.129553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.129586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.129604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.129643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.129676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.141478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.141610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.141642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.141660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.141693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.141726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.141745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.141760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.141789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.151677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.151799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.151831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.151849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.151881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.151914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.151932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.151947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.151976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.162415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.162548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.162581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.162599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.162632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.162665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.162683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.162698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.162729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.172520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.172640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.172694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.172714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.172747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.172779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.172797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.172811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.172841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.473 [2024-10-07 11:31:40.184212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.184352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.184385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.184403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.184437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.184470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.184489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.184503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.184534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.473 [2024-10-07 11:31:40.194516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.473 [2024-10-07 11:31:40.194641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.473 [2024-10-07 11:31:40.194674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.473 [2024-10-07 11:31:40.194692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.473 [2024-10-07 11:31:40.194725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.473 [2024-10-07 11:31:40.194758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.473 [2024-10-07 11:31:40.194776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.473 [2024-10-07 11:31:40.194791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.473 [2024-10-07 11:31:40.194821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.205283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.205432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.205464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.205481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.205514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.205572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.205591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.205606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.205637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.215390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.215516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.215547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.215565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.215597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.215629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.215647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.215661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.215691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.227154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.227288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.227333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.227353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.227387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.227419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.227437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.227452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.227483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.237339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.237468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.237500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.237518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.237550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.237581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.237599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.237614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.237669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.247967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.248094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.248125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.248143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.248175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.248207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.248225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.248240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.248270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.258062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.258186] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.258218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.258235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.258463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.258607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.258634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.258650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.258766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.269336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.269456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.269488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.269506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.269538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.269571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.269589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.269603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.269633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.279433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.279551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.279583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.279629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.279664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.279696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.279714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.279728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.279759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.290069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.290194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.290226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.290244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.290276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.290340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.290362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.290376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.290407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.300164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.300291] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.300337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.300356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.300543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.300684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.300720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.300737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.300866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.311368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.311486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.311517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.311534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.311566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.311598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.311636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.311651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.311683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.321462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.321579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.321610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.321627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.321659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.321692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.321710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.321724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.321754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.331806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.331947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.331978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.331996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.332029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.332076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.332098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.332112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.332142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.341911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.342027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.342070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.342087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.342274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.342446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.342474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.342490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.342607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.474 [2024-10-07 11:31:40.353036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.353174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.353206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.353223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.474 [2024-10-07 11:31:40.353255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.474 [2024-10-07 11:31:40.353287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.474 [2024-10-07 11:31:40.353306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.474 [2024-10-07 11:31:40.353334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.474 [2024-10-07 11:31:40.353369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.474 [2024-10-07 11:31:40.363146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.474 [2024-10-07 11:31:40.363263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.474 [2024-10-07 11:31:40.363294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.474 [2024-10-07 11:31:40.363311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.363359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.363392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.363410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.363424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.363454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.373765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.373890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.373922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.373939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.373971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.374004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.374022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.374036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.374066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.383874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.384009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.384041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.384058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.384112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.384144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.384162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.384176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.384386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.395462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.395581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.395613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.395631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.395662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.395694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.395712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.395727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.395757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.405561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.405677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.405709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.405726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.405758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.405789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.405807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.405821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.405865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.415967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.416092] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.416124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.416142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.416174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.416206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.416224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.416254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.416287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.427027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.427149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.427181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.427198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.427230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.427276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.427297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.427312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.427361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.439182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.439372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.439405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.439422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.439455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.439488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.439506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.439520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.439565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.449278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.449404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.449437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.449455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.449486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.449518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.449536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.449551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.449581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.459933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.460058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.460110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.460130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.460176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.460211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.460230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.460244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.460275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.470027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.470153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.470185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.470203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.470235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.470267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.470299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.470329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.470365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.481447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.481567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.481599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.481617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.481649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.481682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.481700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.481715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.481745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.491538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.491653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.491685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.491707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.491754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.491837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.491869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.491891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.493396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 8562.25 IOPS, 33.45 MiB/s [2024-10-07T11:31:52.998Z] [2024-10-07 11:31:40.502806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.503018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.503052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.503071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.504287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.505397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.505436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.505455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.505675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.475 [2024-10-07 11:31:40.512909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.513028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.513061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.513080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.513112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.513144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.475 [2024-10-07 11:31:40.513162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.475 [2024-10-07 11:31:40.513176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.475 [2024-10-07 11:31:40.513206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.475 [2024-10-07 11:31:40.523213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.475 [2024-10-07 11:31:40.523348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.475 [2024-10-07 11:31:40.523382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.475 [2024-10-07 11:31:40.523400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.475 [2024-10-07 11:31:40.523434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.475 [2024-10-07 11:31:40.523466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.523484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.523498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.523551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.533308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.533439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.533471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.533489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.533521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.533553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.533572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.533586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.533616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.543758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.543890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.543922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.543939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.543972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.544004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.544022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.544036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.544066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.553857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.553976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.554008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.554026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.554212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.554388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.554416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.554431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.554549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.565073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.565195] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.565228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.565272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.565307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.565357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.565376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.565391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.565421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.575173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.575294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.575342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.575362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.575395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.575427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.575445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.575460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.575490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.585657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.585782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.585814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.585831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.585864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.585912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.585933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.585948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.585979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.595755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.595878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.595910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.595927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.596116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.596258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.596311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.596344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.596466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.607015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.607136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.607168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.607186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.607218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.607250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.607268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.607282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.607312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.617115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.617232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.617263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.617281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.617312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.617370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.617389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.617403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.617434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.627644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.627770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.627801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.627819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.627851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.627884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.627903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.627917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.627948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.637754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.637870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.637902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.637920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.637951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.638138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.638174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.638192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.638360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.649012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.649132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.649172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.649190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.649222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.649254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.649273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.649287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.649330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.659106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.659223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.659254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.659271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.659303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.659355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.659375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.659389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.659419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.476 [2024-10-07 11:31:40.669522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.669647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.669679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.669697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.669749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.669782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.476 [2024-10-07 11:31:40.669800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.476 [2024-10-07 11:31:40.669815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.476 [2024-10-07 11:31:40.669845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.476 [2024-10-07 11:31:40.679614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.476 [2024-10-07 11:31:40.679740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.476 [2024-10-07 11:31:40.679772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.476 [2024-10-07 11:31:40.679790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.476 [2024-10-07 11:31:40.679821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.476 [2024-10-07 11:31:40.679854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.679871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.679886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.679924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.690928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.692217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.692263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.692284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.692444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.692486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.692505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.692520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.692551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.701025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.701143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.701175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.701193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.702101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.702340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.702368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.702400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.702491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.712041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.712165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.712196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.712214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.712246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.712277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.712295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.712309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.712360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.723167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.723293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.723339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.723359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.723392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.723424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.723442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.723456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.723486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.734950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.735070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.735101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.735119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.735151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.735183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.735202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.735216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.735247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.745048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.745181] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.745213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.745231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.745263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.745309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.745348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.745363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.745395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.755446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.755578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.755611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.755629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.755662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.755694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.755713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.755727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.755757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.765550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.765683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.765722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.765746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.765778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.765819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.765837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.765853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.766044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.777101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.777223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.777255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.777272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.777305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.777382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.777428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.777444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.777478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.787201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.787341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.787382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.787399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.787432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.787465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.787482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.787496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.787526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.797728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.797858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.797891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.797909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.797942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.797974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.797992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.798007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.798037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.807825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.807943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.807976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.807996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.808194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.808351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.808382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.808399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.808536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.819133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.819256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.819289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.819310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.819359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.819392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.819411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.819425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.819455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.477 [2024-10-07 11:31:40.829231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.829365] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.829398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.829416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.477 [2024-10-07 11:31:40.829449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.477 [2024-10-07 11:31:40.829482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.477 [2024-10-07 11:31:40.829501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.477 [2024-10-07 11:31:40.829515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.477 [2024-10-07 11:31:40.829546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.477 [2024-10-07 11:31:40.839741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.477 [2024-10-07 11:31:40.839869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.477 [2024-10-07 11:31:40.839901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.477 [2024-10-07 11:31:40.839918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.478 [2024-10-07 11:31:40.839951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.478 [2024-10-07 11:31:40.839998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.478 [2024-10-07 11:31:40.840019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.478 [2024-10-07 11:31:40.840034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.478 [2024-10-07 11:31:40.840065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.478 [2024-10-07 11:31:40.849839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.478 [2024-10-07 11:31:40.849959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.478 [2024-10-07 11:31:40.849991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.478 [2024-10-07 11:31:40.850027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.478 [2024-10-07 11:31:40.850219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.478 [2024-10-07 11:31:40.850395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.478 [2024-10-07 11:31:40.850432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.478 [2024-10-07 11:31:40.850449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.478 [2024-10-07 11:31:40.850570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.478 [2024-10-07 11:31:40.860993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.478 [2024-10-07 11:31:40.861111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.478 [2024-10-07 11:31:40.861143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.478 [2024-10-07 11:31:40.861160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.478 [2024-10-07 11:31:40.861192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.478 [2024-10-07 11:31:40.861225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.478 [2024-10-07 11:31:40.861243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.478 [2024-10-07 11:31:40.861256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.478 [2024-10-07 11:31:40.861286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.478 [2024-10-07 11:31:40.871088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.478 [2024-10-07 11:31:40.871205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.478 [2024-10-07 11:31:40.871236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.478 [2024-10-07 11:31:40.871254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.478 [2024-10-07 11:31:40.871286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.871332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.871353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.871368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.872567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.881364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.881501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.881535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.881552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.881585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.881617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.881650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.881666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.881699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.891471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.891747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.891792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.891811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.891944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.892070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.892105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.892122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.892179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.902533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.902653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.902686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.902703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.902734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.902767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.902785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.902799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.902830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.912628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.912750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.912783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.912800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.914007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.914243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.914280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.914309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.915129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.922722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.922839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.922871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.922888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.922919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.922951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.922969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.922983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.923013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.933135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.933274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.933313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.933348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.933382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.933414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.933432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.933445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.933476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.943761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.943892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.943933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.943951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.943984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.944026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.944045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.944059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.944090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.953868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.953992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.954024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.954042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.954095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.954128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.954146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.954165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.954197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.964282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.964424] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.964457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.964476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.964508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.964540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.964558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.964573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.964604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.974393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.974513] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.974546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.974565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.974752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.974896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.974922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.974937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.975053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:40.985652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.985769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.985801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.985819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.985852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.985884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.985902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.985936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.985970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:40.995741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:40.995867] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:40.995899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:40.995916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:40.995948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:40.995979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:40.995997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:40.996012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:40.996042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.479 [2024-10-07 11:31:41.006170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:41.006307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:41.006354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:41.006373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:41.006407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:41.006459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:41.006481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:41.006496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:41.006527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.479 [2024-10-07 11:31:41.016265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.479 [2024-10-07 11:31:41.016398] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.479 [2024-10-07 11:31:41.016431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.479 [2024-10-07 11:31:41.016449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.479 [2024-10-07 11:31:41.016481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.479 [2024-10-07 11:31:41.016668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.479 [2024-10-07 11:31:41.016696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.479 [2024-10-07 11:31:41.016711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.479 [2024-10-07 11:31:41.016846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.027665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.027830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.027865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.027882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.027916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.027956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.027974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.027989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.028042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.037790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.037906] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.037949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.037967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.037999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.038031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.038054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.038068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.038098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.048864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.050648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.050715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.050760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.052135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.052530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.052588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.052617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.052767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.059285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.059451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.059487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.059505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.059542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.059605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.059625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.059639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.059675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.072829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.073116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.073162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.073182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.073399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.073585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.073622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.073640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.073768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.084269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.084407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.084441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.084459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.084495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.084531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.084550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.084564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.084599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.094390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.094522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.094566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.094585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.094622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.094659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.094677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.094692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.094752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.104947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.105088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.105120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.105138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.105176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.105212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.105231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.105245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.105280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.115059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.115187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.115220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.115238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.115453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.115601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.115646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.115664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.115788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.126383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.126527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.126560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.126579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.126616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.126653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.126672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.126688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.126723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.136502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.136638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.136670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.136718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.136759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.136796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.136814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.136829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.136863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.147015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.147170] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.147204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.147222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.147261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.147331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.147355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.147371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.147407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.157138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.157279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.157313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.157349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.157545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.157693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.157730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.157750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.157873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.168410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.168534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.168566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.168584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.168621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.168657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.168695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.168711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.168747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.178516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.178665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.178699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.178717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.178755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.178793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.178812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.178827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.178863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.480 [2024-10-07 11:31:41.189182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.189356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.189391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.189409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.480 [2024-10-07 11:31:41.189449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.480 [2024-10-07 11:31:41.189487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.480 [2024-10-07 11:31:41.189506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.480 [2024-10-07 11:31:41.189529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.480 [2024-10-07 11:31:41.189564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.480 [2024-10-07 11:31:41.199306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.480 [2024-10-07 11:31:41.199467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.480 [2024-10-07 11:31:41.199500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.480 [2024-10-07 11:31:41.199519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.199573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.199614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.199633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.199648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.199840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.210759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.210883] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.210915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.210933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.210969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.211006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.211024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.211038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.211073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.220870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.220990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.221023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.221040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.221078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.221113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.221132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.221146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.221198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.231386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.231515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.231547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.231564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.231601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.231637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.231656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.231670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.231705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.241491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.241611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.241643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.241660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.241875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.242025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.242063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.242080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.242203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.253177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.253301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.253347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.253366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.253404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.253441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.253459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.253473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.253508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.263285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.263408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.263440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.263457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.263493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.263530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.263548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.263562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.263596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.273748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.273881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.273913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.273931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.273975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.274013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.274031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.274067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.274105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.284548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.284671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.284703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.284722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.284766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.284803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.284822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.284837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.284872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.296028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.297341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.297387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.297407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.297570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.297631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.297652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.297667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.297731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.306364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.306494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.306526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.306544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.306581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.307485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.307523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.307541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.307736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.317522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.317661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.317693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.317711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.317747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.317794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.317815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.317830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.317864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.328698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.328818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.328849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.328867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.328904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.328940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.328958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.328973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.329017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.340799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.340956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.340988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.341009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.341045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.341082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.341100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.341114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.341149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.351117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.351236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.351268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.351286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.351340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.351395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.351415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.351429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.351466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.361220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.361352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.361385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.361403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.361977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.362165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.362199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.362216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.362356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.481 [2024-10-07 11:31:41.371640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.481 [2024-10-07 11:31:41.371788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.481 [2024-10-07 11:31:41.371821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.481 [2024-10-07 11:31:41.371838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.481 [2024-10-07 11:31:41.371874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.481 [2024-10-07 11:31:41.371910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.481 [2024-10-07 11:31:41.371929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.481 [2024-10-07 11:31:41.371943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.481 [2024-10-07 11:31:41.371978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.481 [2024-10-07 11:31:41.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.481 [2024-10-07 11:31:41.382204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.481 [2024-10-07 11:31:41.382233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.481 [2024-10-07 11:31:41.382249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.382929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.382976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.382991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76680 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:57.482 [2024-10-07 11:31:41.383545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.482 [2024-10-07 11:31:41.383956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.383979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.383994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.384025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.384056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.384087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.384119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.482 [2024-10-07 11:31:41.384150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.482 [2024-10-07 11:31:41.384167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.384739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 
[2024-10-07 11:31:41.384857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.384983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.384998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.483 [2024-10-07 11:31:41.385393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76528 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.483 [2024-10-07 11:31:41.385870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe35020 is same with the state(6) to be set 00:20:57.483 [2024-10-07 11:31:41.385903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.385915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.385926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76544 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.385940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.385956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.385966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.385978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.386050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77120 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.386149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.386199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:20:57.483 [2024-10-07 11:31:41.386250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.483 [2024-10-07 11:31:41.386264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.483 [2024-10-07 11:31:41.386283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.483 [2024-10-07 11:31:41.386306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77144 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77152 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77184 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77192 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:57.484 [2024-10-07 11:31:41.386687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:57.484 [2024-10-07 11:31:41.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77200 len:8 PRP1 0x0 PRP2 0x0 00:20:57.484 [2024-10-07 11:31:41.386711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.484 [2024-10-07 11:31:41.386772] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe35020 was disconnected and freed. reset controller. 
00:20:57.484 [2024-10-07 11:31:41.387886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.387973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.388136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.388488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.388523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.388542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.388595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.388619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.388635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.388733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.388762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.388791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.388810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.388825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.388843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.388857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.388870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.388902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.388919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.398523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.398575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.398682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.398721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.398738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.398787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.398810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.398826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.398858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.398882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.398909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.398928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.398942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.398958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.398972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.398985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.399032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.399053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.408653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.408726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.408827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.408856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.408873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.408941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.408968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.408985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.409003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.409035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.409057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.409071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.409085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.409270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.409307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.409339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.409355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.409490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.420193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.420244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.420389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.420422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.420440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.420489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.420513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.420529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.420562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.420587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.420631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.420653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.420667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.420684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.420712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.420727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.420758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.420776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.430347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.430419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.430498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.430527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.430544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.430617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.430643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.430659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.430678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.430711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.430732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.430746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.430760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.430790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.430808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.430822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.430835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.430862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.441176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.441224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.441343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.441375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.441392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.441442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.441465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.441481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.441513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.441552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.441581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.441600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.441614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.441630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.441644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.441657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.441687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.484 [2024-10-07 11:31:41.441704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.484 [2024-10-07 11:31:41.452663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.452712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.484 [2024-10-07 11:31:41.452806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.452837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.484 [2024-10-07 11:31:41.452855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.452903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.484 [2024-10-07 11:31:41.452926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.484 [2024-10-07 11:31:41.452942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.484 [2024-10-07 11:31:41.452974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.452998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.484 [2024-10-07 11:31:41.453025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.453043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.453057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.484 [2024-10-07 11:31:41.453072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.484 [2024-10-07 11:31:41.453087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.484 [2024-10-07 11:31:41.453100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.453129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.453146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.463876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.463923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.465260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.465305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.465373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.465429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.465454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.465470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.465627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.465659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.465689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.465707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.465721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.465738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.465752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.465766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.465796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.465814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.474086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.474136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.474227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.474257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.474274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.474352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.474379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.474395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.475277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.475335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.475526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.475552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.475566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.475583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.475597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.475625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.475701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.475721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.484896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.484945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.485054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.485085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.485102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.485150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.485174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.485190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.485223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.485247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.485273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.485291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.485305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.485321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.485351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.485366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.485933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.485960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.495792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.495841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.495934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.495964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.495981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.496029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.496051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.496067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.496099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.496122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.496167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.496186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.496200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.496217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.496231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.496244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.496275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.496292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 8643.60 IOPS, 33.76 MiB/s [2024-10-07T11:31:53.008Z] [2024-10-07 11:31:41.507506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.507558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.507690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.507723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.507741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.507790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.507813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.507829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.507862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.507885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.507930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.507952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.507966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.507983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.507997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.508011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.508041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.508058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.517629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.517702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.517781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.517809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.517841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.517908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.517935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.517951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.517969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.518001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.518022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.518036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.518050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.518080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.518098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.518111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.518124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.519332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.528138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.528188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.528288] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.528333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.528353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.528404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.528427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.528442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.528475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.528499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.485 [2024-10-07 11:31:41.528526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.528544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.528558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.528574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.485 [2024-10-07 11:31:41.528588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.485 [2024-10-07 11:31:41.528601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.485 [2024-10-07 11:31:41.529824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.485 [2024-10-07 11:31:41.529862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.485 [2024-10-07 11:31:41.538265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.538337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.485 [2024-10-07 11:31:41.538432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.538462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.485 [2024-10-07 11:31:41.538480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.485 [2024-10-07 11:31:41.538527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.485 [2024-10-07 11:31:41.538550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.485 [2024-10-07 11:31:41.538566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.538752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.538784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.538913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.538938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.538953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.538970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.538984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.538997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.539113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.539136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.549504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.549553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.549644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.549674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.549691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.549738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.549762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.549777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.549809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.549833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.549860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.549893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.549909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.549926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.549940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.549953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.549984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.550001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.559628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.559700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.559779] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.559807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.559823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.559887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.559914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.559930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.559948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.561159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.561202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.561221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.561235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.561457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.561483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.561499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.561513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.562338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.570052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.570102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.570203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.570234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.570251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.570345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.570373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.570390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.570424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.570447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.570474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.570492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.570506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.570523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.570537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.570550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.570579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.570596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.580180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.580231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.580338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.580370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.580387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.580436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.580459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.580475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.580663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.580696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.580826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.580851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.580867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.580884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.580898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.580912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.581027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.581066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.591484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.591535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.591639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.591670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.591688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.591736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.591759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.591775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.591807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.591830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.591856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.591874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.591888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.591904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.591919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.591932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.591961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.591979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.601604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.601677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.601754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.601783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.601800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.601863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.601889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.601905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.601924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.601955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.601976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.601990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.602021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.602054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.602072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.602086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.602100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.603302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.612288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.612352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.612456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.612487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.612504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.612553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.612576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.612592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.612624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.612647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.612674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.612692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.612706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.612723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.612737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.612750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.612780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.612797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.622430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.622504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.486 [2024-10-07 11:31:41.622583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.622612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.486 [2024-10-07 11:31:41.622629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.622692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.486 [2024-10-07 11:31:41.622719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.486 [2024-10-07 11:31:41.622752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.486 [2024-10-07 11:31:41.622772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.622961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.486 [2024-10-07 11:31:41.622991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.623005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.623019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.623153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.486 [2024-10-07 11:31:41.623178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.486 [2024-10-07 11:31:41.623192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.486 [2024-10-07 11:31:41.623206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.486 [2024-10-07 11:31:41.623336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.486 [2024-10-07 11:31:41.633788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.633838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.633932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.633963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.633980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.634028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.634052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.634067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.634099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.634122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.634149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.634167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.634183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.634199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.634213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.634226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.634256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.634273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.643912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.644006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.644086] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.644115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.644132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.644196] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.644223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.644239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.644257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.644289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.644310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.644341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.644356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.644388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.644407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.644421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.644434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.645621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.654620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.654678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.654778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.654808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.654825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.654872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.654895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.654911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.654957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.654984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.655011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.655028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.655043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.655074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.655091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.655104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.655134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.655152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.664749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.664823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.664905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.664934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.664950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.665014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.665040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.665057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.665075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.665263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.665292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.665308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.665339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.665475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.665500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.665515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.665529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.665653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.676131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.676248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.676344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.676374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.676391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.676457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.676484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.676500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.676538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.676573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.676595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.676609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.676622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.676671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.676692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.676706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.676720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.676747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.686225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.686363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.686396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.686414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.686458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.686498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.686528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.686545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.686559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.686587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.686650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.686675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.686691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.686722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.686753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.686771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.686785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.687974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.696964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.697016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.697135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.697167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.697185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.697233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.697257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.697272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.697304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.697347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.697377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.697397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.697411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.697427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.697442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.697455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.697484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.697502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.707108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.707158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.707250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.707282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.707299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.707364] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.707389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.707406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.707594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.707626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.707756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.707781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.707795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.707813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.487 [2024-10-07 11:31:41.707844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.487 [2024-10-07 11:31:41.707859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.487 [2024-10-07 11:31:41.707977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.487 [2024-10-07 11:31:41.708001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.487 [2024-10-07 11:31:41.718559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.718613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.487 [2024-10-07 11:31:41.718717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.718749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.487 [2024-10-07 11:31:41.718767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.718816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.487 [2024-10-07 11:31:41.718839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.487 [2024-10-07 11:31:41.718855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.487 [2024-10-07 11:31:41.718888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.487 [2024-10-07 11:31:41.718911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.718938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.718956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.718970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.718986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.719001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.719014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.719043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.719061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.728698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.728772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.728852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.728881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.728898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.728962] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.728988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.729005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.729040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.729092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.729118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.729133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.729146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.730363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.730403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.730420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.730435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.730654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.739218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.739277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.739395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.739427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.739445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.739493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.739516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.739532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.739564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.739589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.739616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.739633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.739647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.739664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.739678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.739691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.739721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.739739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.749372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.749423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.749516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.749560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.749580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.749631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.749655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.749670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.749858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.749890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.750020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.750046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.750061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.750077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.750092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.750105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.750221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.750245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.760723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.760774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.760868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.760898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.760916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.760963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.760986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.761002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.761034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.761058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.761084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.761102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.761116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.761132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.761147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.761175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.761207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.761225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.770853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.770905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.770997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.771028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.771045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.771093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.771116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.771132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.771164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.771188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.771214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.771232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.771247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.771263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.771277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.771291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.772506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.772545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.781371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.781422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.781547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.781595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.781625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.781705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.781739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.781766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.781814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.781868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.783168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.783213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.783232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.783250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.783265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.783278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.783527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.783554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.791502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.791550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.791644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.791675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.791692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.791740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.791763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.791779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.791966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.791998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.488 [2024-10-07 11:31:41.792127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.792152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.792167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.792184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.488 [2024-10-07 11:31:41.792198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.488 [2024-10-07 11:31:41.792212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.488 [2024-10-07 11:31:41.792342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.488 [2024-10-07 11:31:41.792367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.488 [2024-10-07 11:31:41.802817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.802868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.488 [2024-10-07 11:31:41.802961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.802991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.488 [2024-10-07 11:31:41.803025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.488 [2024-10-07 11:31:41.803079] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.488 [2024-10-07 11:31:41.803103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.488 [2024-10-07 11:31:41.803119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.803152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.803176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.803203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.803221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.803235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.803252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.803266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.803279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.803308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.803343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.812945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.813018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.813096] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.813124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.813142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.813205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.813232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.813248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.813267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.813299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.813334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.813351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.813365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.814563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.814602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.814620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.814649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.814871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.823497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.823547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.823651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.823681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.823699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.823747] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.823770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.823786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.823818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.823841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.823886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.823907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.823921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.823938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.823952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.823965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.823994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.824012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.833619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.833691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.833770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.833799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.833816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.834042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.834073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.834090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.834109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.834241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.834269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.834328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.834348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.834471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.834497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.834511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.834524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.834581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.844811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.844863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.844957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.844988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.845005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.845061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.845084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.845099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.845131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.845155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.845182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.845200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.845214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.845231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.845245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.845258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.845287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.845304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.854937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.855009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.855089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.855117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.855134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.855229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.855256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.855273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.855291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.855340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.855364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.855379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.855392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.855423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.855441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.855455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.855468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.856679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.865033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.865157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.865189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.865207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.865252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.865977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.866030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.866049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.866063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.866682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.866765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.866792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.866809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.867055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.867127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.867151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.867166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.867212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.875126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.875239] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.875271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.875288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.875337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.875380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.875399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.875413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.875443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.876865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.876976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.877007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.877025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.877056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.877089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.877107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.877121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.877151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.489 [2024-10-07 11:31:41.885219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.885347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.885379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.489 [2024-10-07 11:31:41.885396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.885429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.885461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.885480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.885494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.885525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.489 [2024-10-07 11:31:41.886954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.489 [2024-10-07 11:31:41.887066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.489 [2024-10-07 11:31:41.887097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.489 [2024-10-07 11:31:41.887131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.489 [2024-10-07 11:31:41.887165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.489 [2024-10-07 11:31:41.887197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.489 [2024-10-07 11:31:41.887215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.489 [2024-10-07 11:31:41.887229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.489 [2024-10-07 11:31:41.887273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.895352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.895467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.895499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.895516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.895548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.895579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.895597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.895611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.895641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.898691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.899772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.899817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.899837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.900188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.900279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.900305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.900335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.900380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.905700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.905813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.905843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.905861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.905893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.905924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.905960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.905975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.906007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.908975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.909085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.909116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.909133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.909164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.909196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.909214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.909229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.910130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.916622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.916735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.916766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.916784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.916815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.916848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.916866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.916880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.916909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.919831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.919942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.919973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.919990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.920022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.920054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.920072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.920087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.920116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.928492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.928620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.928652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.928670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.928702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.928734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.928752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.928766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.928796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.930725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.930835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.930866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.930883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.930915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.930948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.930966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.930980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.931010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.938597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.938709] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.938740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.938758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.938790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.938822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.938840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.938855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.938885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.942527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.942641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.942672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.942689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.942744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.942777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.942796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.942810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.942840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.949152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.949276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.949308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.949343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.949377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.949410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.949428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.949442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.949472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.952620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.952729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.952760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.952778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.952810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.952842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.952861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.952875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.952905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.959242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.959367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.959399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.959417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.959449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.959482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.959500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.959534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.959721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.963222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.963354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.963386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.490 [2024-10-07 11:31:41.963404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.963438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.963470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.963488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.963502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.963532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.490 [2024-10-07 11:31:41.970682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.970798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.970829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.490 [2024-10-07 11:31:41.970847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.490 [2024-10-07 11:31:41.970879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.490 [2024-10-07 11:31:41.970911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.490 [2024-10-07 11:31:41.970929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.490 [2024-10-07 11:31:41.970943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.490 [2024-10-07 11:31:41.970974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.490 [2024-10-07 11:31:41.973347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.490 [2024-10-07 11:31:41.973453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.490 [2024-10-07 11:31:41.973484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:41.973501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:41.973532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:41.973564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:41.973582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:41.973597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:41.973626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:41.980776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:41.980890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:41.980937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:41.980957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:41.980989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:41.981022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:41.981040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:41.981054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:41.981084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:41.984761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:41.984876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:41.984908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:41.984925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:41.984957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:41.984989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:41.985007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:41.985022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:41.985052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:41.991358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:41.991487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:41.991518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:41.991537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:41.991569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:41.991601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:41.991619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:41.991633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:41.991663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:41.994852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:41.994961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:41.994991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:41.995009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:41.995040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:41.995091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:41.995111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:41.995126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:41.995172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.001454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.001566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.001598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.001616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.001648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.001680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.001698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.001713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.001743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.004941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.005052] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.005083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.005101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.005684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.005871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.005907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.005924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.006031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.013301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.013427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.013459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.013477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.013509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.013542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.013559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.013574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.013622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.015496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.015608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.015639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.015657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.015689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.015721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.015739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.015754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.015783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.023403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.023516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.023547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.023565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.023597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.023629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.023648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.023662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.023692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.027311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.027473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.027506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.027523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.027556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.027588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.027607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.027621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.027651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.034055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.034175] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.034205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.034241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.034275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.034338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.034361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.034375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.034406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.037413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.037521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.037552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.037570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.037602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.037634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.037653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.037667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.037697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.044145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.044259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.044290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.044307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.044355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.044387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.044405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.044419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.044602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.048124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.048245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.048276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.048294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.048341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.048377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.048411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.048427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.048458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.055546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.055658] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.055689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.055706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.055739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.055771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.055789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.055804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.055833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.491 [2024-10-07 11:31:42.058216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.058348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.058381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.491 [2024-10-07 11:31:42.058398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.058431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.491 [2024-10-07 11:31:42.058464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.491 [2024-10-07 11:31:42.058482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.491 [2024-10-07 11:31:42.058496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.491 [2024-10-07 11:31:42.058680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.491 [2024-10-07 11:31:42.065638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.491 [2024-10-07 11:31:42.065756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.491 [2024-10-07 11:31:42.065788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.491 [2024-10-07 11:31:42.065805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.491 [2024-10-07 11:31:42.065837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.065869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.065887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.065902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.065932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.069603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.069754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.069787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.069805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.069837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.069872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.069890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.069905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.069935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.075736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.075852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.075884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.075901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.075933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.075966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.075984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.075998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.076580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.080116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.080229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.080260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.080277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.080310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.080358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.080377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.080392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.080422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.086478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.086592] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.086623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.086640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.086690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.086724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.086742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.086756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.086787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.090980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.091100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.091132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.091149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.091182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.091215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.091233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.091247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.091277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.098421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.098536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.098567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.098585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.098618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.098651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.098669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.098684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.098714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.101070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.101178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.101209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.101226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.101258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.101290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.101308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.101355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.101543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.108518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.108632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.108663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.108681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.108713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.108745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.108763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.108777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.108807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.112433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.112546] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.112578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.112595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.112627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.112660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.112678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.112692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.112722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.119017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.119137] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.119169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.119186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.119233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.119269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.119288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.119302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.119349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.122521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.122632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.122678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.122698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.122730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.122762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.122781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.122795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.122824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.129109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.129232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.129263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.129281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.129313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.129361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.129380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.129394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.129425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.133151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.133414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.133455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.133474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.133583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.133626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.133646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.133661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.133691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.140821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.140936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.140967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.140984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.141016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.141066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.141086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.141100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.141131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.143243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.143367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.143398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.143415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.143447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.143479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.143497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.143512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.143541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.492 [2024-10-07 11:31:42.150916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.151030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.151062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.492 [2024-10-07 11:31:42.151079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.151111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.151144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.151162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.492 [2024-10-07 11:31:42.151176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.492 [2024-10-07 11:31:42.151205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.492 [2024-10-07 11:31:42.154832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.492 [2024-10-07 11:31:42.154946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.492 [2024-10-07 11:31:42.154977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.492 [2024-10-07 11:31:42.154995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.492 [2024-10-07 11:31:42.155027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.492 [2024-10-07 11:31:42.155059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.492 [2024-10-07 11:31:42.155077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.155091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.155139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.161501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.161640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.161672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.161690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.161722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.161755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.161773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.161787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.161817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.164921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.165031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.165063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.165081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.165112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.165144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.165162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.165176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.165206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.171592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.171706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.171738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.171755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.171787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.171819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.171837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.171851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.171881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.175686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.175807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.175838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.175874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.175908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.175940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.175958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.175972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.176003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.183087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.183240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.183273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.183290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.183337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.183373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.183392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.183406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.183436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.185780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.185886] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.185917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.185935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.185977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.186008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.186027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.186041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.186070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.193237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.193362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.193395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.193412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.193450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.193482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.193517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.193532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.193564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.197404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.197523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.197555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.197572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.197615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.197649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.197667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.197681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.197711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.204054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.204177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.204208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.204226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.204273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.204308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.204344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.204360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.204391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.207509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.207620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.207651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.207669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.207700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.207732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.207751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.207765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.207795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.214146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.214259] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.214303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.214338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.214373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.214406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.214424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.214438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.214467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.218233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.218374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.218407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.218424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.218457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.218489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.218508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.218522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.218552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.225626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.225777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.225810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.225827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.225860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.225893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.225911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.225928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.225958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.228345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.228455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.228486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.228503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.228553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.228585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.228603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.228617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.228648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.493 [2024-10-07 11:31:42.235754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.235873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.235905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.493 [2024-10-07 11:31:42.235922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.235954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.235987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.493 [2024-10-07 11:31:42.236005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.493 [2024-10-07 11:31:42.236019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.493 [2024-10-07 11:31:42.236049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.493 [2024-10-07 11:31:42.239829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.493 [2024-10-07 11:31:42.239940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.493 [2024-10-07 11:31:42.239971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.493 [2024-10-07 11:31:42.239988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.493 [2024-10-07 11:31:42.240020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.493 [2024-10-07 11:31:42.240052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.240071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.240085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.240115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.246492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.246613] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.246644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.246661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.246693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.246726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.246743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.246775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.246808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.249919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.250030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.250060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.250078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.250109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.250141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.250160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.250174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.250204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.256596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.256706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.256737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.256754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.256786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.256818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.256836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.256850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.256881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.260627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.260763] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.260795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.260812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.260844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.260876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.260894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.260908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.260937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.268007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.268129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.268167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.268185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.268217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.268250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.268268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.268282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.268312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.270716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.270827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.270858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.270875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.270907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.270939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.270957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.270971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.271155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.278103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.278215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.278247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.278264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.278307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.278359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.278378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.278392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.278421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.282092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.282202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.282233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.282250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.282282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.282362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.282382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.282397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.282427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.288753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.288873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.288905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.288923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.288955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.288987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.289005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.289019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.289049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.292177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.292289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.292332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.292352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.292384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.292417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.292435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.292449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.292479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.298843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.298954] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.298985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.299002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.299034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.299066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.299084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.299099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.299301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.302846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.302965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.302996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.303014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.303045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.303077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.303095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.303109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.303139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.310229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.310363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.310396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.310413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.310446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.310479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.310496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.310510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.310541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.494 [2024-10-07 11:31:42.312955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.313064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.313095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.494 [2024-10-07 11:31:42.313112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.494 [2024-10-07 11:31:42.313143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.494 [2024-10-07 11:31:42.313175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.494 [2024-10-07 11:31:42.313193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.494 [2024-10-07 11:31:42.313207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.494 [2024-10-07 11:31:42.313405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.494 [2024-10-07 11:31:42.320329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.494 [2024-10-07 11:31:42.320442] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.494 [2024-10-07 11:31:42.320473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.494 [2024-10-07 11:31:42.320507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.320541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.320573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.320591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.320605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.320635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.324339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.324461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.324492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.324509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.324541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.324572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.324590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.324604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.324634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.330967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.331088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.331119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.331136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.331169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.331200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.331218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.331233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.331263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.334430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.334540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.334570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.334588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.334619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.334651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.334685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.334701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.334732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.341055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.341177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.341208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.341225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.341256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.341460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.341488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.341503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.341636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.344974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.345093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.345124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.345141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.345173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.345205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.345223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.345237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.345267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.352362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.352474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.352505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.352523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.352555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.352587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.352605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.352620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.352650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.355066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.355178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.355209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.355226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.355441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.355584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.355619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.355637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.355754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.362855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.364154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.364201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.364225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.364473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.365288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.365335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.365356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.366614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.366782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.366880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.366911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.366928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.366961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.366993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.367013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.367027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.367057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.373139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.373296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.373357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.373385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.373456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.373503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.373530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.373554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.375015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.376878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.378496] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.378561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.378593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.378887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.379896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.379946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.379967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.381166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.383253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.383541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.383587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.383607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.383741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.383867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.383902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.383919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.383979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.387002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.387117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.387149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.387167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.387199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.387230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.387249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.387278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.387311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.394366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.394482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.394515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.394532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.394564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.394597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.394615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.394631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.394672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.397094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.397204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.397235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.397251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.397455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.397598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.397634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.397652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.397782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.495 [2024-10-07 11:31:42.404455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.404568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.404599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.495 [2024-10-07 11:31:42.404617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.404649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.495 [2024-10-07 11:31:42.404681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.495 [2024-10-07 11:31:42.404699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.495 [2024-10-07 11:31:42.404714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.495 [2024-10-07 11:31:42.404743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.495 [2024-10-07 11:31:42.408429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.495 [2024-10-07 11:31:42.408557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.495 [2024-10-07 11:31:42.408590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.495 [2024-10-07 11:31:42.408607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.495 [2024-10-07 11:31:42.408640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.408672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.408690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.408705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.408734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.415075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.415197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.415228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.415246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.415278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.415310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.415346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.415362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.415393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.418533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.418646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.418677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.418693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.418725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.418757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.418775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.418790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.418820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.425164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.425279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.425310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.425346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.425379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.425431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.425450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.425465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.425649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.429133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.429253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.429285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.429302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.429350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.429385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.429403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.429417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.429446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.436570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.436686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.436717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.436734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.436766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.436798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.436816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.436830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.436860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.439225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.439350] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.439382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.439399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.439431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.439464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.439482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.439496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.439697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.446664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.446783] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.446815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.446833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.446865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.446898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.446916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.446930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.446960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.450652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.450760] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.450791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.450808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.450840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.450872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.450891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.450905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.450936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.457223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.457353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.457385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.457403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.457452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.457488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.457507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.457521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.457551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.460739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.460852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.460882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.460917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.460951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.460984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.461002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.461016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.461046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.467312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.467435] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.467466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.467484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.467517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.467549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.467567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.467581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.467611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.471407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.471529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.471560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.471578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.471610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.471643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.471661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.471676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.471706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.479258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.479441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.479474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.479492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.479525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.479559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.479598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.479614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.479647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.481617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.481728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.481759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.481776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.481809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.481842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.481860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.481874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.481904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 [2024-10-07 11:31:42.489480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.489599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.489630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.489648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.489680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.489713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.489731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.489745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.489775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.496 [2024-10-07 11:31:42.493612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.493723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.493754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.496 [2024-10-07 11:31:42.493772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.493805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.493837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.496 [2024-10-07 11:31:42.493855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.496 [2024-10-07 11:31:42.493870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.496 [2024-10-07 11:31:42.493900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.496 8696.33 IOPS, 33.97 MiB/s [2024-10-07T11:31:53.019Z] [2024-10-07 11:31:42.500825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.496 [2024-10-07 11:31:42.502124] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.496 [2024-10-07 11:31:42.502168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.496 [2024-10-07 11:31:42.502189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.496 [2024-10-07 11:31:42.502423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.496 [2024-10-07 11:31:42.502475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.502495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.502511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.502543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.503703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.503812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.503843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.503860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.503892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.503924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.503942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.503956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.503986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.511185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.511339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.511371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.511389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.511422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.511455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.511473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.511487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.511517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.514574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.514693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.514723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.514759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.514795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.514828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.514846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.514861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.514891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.521968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.522080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.522112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.522130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.522162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.522195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.522213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.522227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.522257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.524667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.524774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.524805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.524823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.524855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.524887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.524905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.524919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.525103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.532057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.532168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.532199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.532216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.532248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.532280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.532298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.532347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.532381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.536084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.536233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.536265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.536283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.536329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.536366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.536385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.536400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.536431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.542847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.542968] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.542999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.543017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.543049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.543080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.543098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.543112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.543142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.546208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.546352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.546385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.546403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.546436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.546468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.546486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.546501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.546531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.552936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.553068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.553099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.553117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.553164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.553199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.553217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.553231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.553262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.556984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.557104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.557136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.557153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.557200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.557235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.557254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.557268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.557297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.564449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.564562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.564594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.564611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.564644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.564680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.564698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.564712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.564743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.567071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.567182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.567213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.567230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.567278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.567312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.567346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.567361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.567392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.574543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.574655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.574687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.497 [2024-10-07 11:31:42.574704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.574735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.574767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.574785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.574806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.574836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.497 [2024-10-07 11:31:42.578582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.578696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.578726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.497 [2024-10-07 11:31:42.578744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.497 [2024-10-07 11:31:42.578775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.497 [2024-10-07 11:31:42.578808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.497 [2024-10-07 11:31:42.578826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.497 [2024-10-07 11:31:42.578841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.497 [2024-10-07 11:31:42.578870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.497 [2024-10-07 11:31:42.585209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.497 [2024-10-07 11:31:42.585345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.497 [2024-10-07 11:31:42.585377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.585394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.585427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.585460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.585478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.585493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.585540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.588674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.588784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.588815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.588832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.588864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.588896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.588914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.588929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.588958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.595302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.595427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.595458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.595475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.595507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.595539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.595556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.595571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.595755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.599276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.599416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.599449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.599466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.599499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.599533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.599552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.599567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.599597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.606708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.606823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.606870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.606889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.606921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.606954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.606972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.606986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.607016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.609392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.609499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.609529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.609546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.609578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.609610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.609628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.609643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.609827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.616802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.616914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.616945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.616962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.616994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.617026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.617044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.617059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.617088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.620760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.620872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.620903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.620920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.620953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.621002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.621021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.621036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.621066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.627382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.627503] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.627534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.627552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.627584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.627617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.627634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.627648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.627678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.630852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.630964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.630995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.631012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.631045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.631076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.631094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.631109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.631138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.637473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.637585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.637616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.637634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.637665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.637856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.637881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.637896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.638028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.641382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.641502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.641533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.641551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.641584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.641618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.641636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.641650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.641679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.648784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.648904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.648936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.648953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.648985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.649017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.649035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.649049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.649080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.651474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.651581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.651611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.651629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.651660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.651692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.651710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.651725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.651908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.658875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.658987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.659018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.659052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.659086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.659118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.659136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.659150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.659180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.662820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.662933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.662964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.662981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.663013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.663046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.663065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.663079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.663109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.498 [2024-10-07 11:31:42.669424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.669544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.669576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.498 [2024-10-07 11:31:42.669594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.669641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.498 [2024-10-07 11:31:42.669677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.498 [2024-10-07 11:31:42.669695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.498 [2024-10-07 11:31:42.669709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.498 [2024-10-07 11:31:42.669739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.498 [2024-10-07 11:31:42.672910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.498 [2024-10-07 11:31:42.673020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.498 [2024-10-07 11:31:42.673052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.498 [2024-10-07 11:31:42.673069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.498 [2024-10-07 11:31:42.673100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.673133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.673151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.673182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.673213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.679512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.679624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.679655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.679673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.679861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.680003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.680028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.680043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.680159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.683425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.683543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.683574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.683591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.683623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.683656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.683674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.683688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.683718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.690766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.690879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.690911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.690928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.690960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.690993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.691011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.691025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.691055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.693512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.693635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.693667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.693684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.693870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.694012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.694051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.694069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.694186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.700859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.700972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.701003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.701020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.701052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.701085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.701102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.701117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.701147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.704701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.704813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.704844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.704861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.704894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.704926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.704943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.704963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.704992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.711293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.711429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.711462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.711479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.711529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.711563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.711581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.711596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.711626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.714793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.714904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.714935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.714953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.714985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.715016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.715034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.715048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.715078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.721397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.721509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.721540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.721559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.721746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.721887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.721912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.721926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.722042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.725227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.725358] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.725390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.725408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.725440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.725489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.725511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.725540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.725573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.732617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.732732] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.732764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.732782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.732814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.732847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.732865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.732880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.732909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.735328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.735438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.735468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.735486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.735673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.735814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.735859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.735877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.735996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.742711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.742823] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.742854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.742871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.742903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.742935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.742952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.742967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.743013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.746561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.746674] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.746722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.746741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.746774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.746807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.746825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.746845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.746875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.753147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.753269] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.753301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.753331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.753367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.753399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.753417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.753431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.753461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.499 [2024-10-07 11:31:42.756649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.756767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.756799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.499 [2024-10-07 11:31:42.756816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.756848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.756881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.499 [2024-10-07 11:31:42.756900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.499 [2024-10-07 11:31:42.756914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.499 [2024-10-07 11:31:42.756944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.499 [2024-10-07 11:31:42.763240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.499 [2024-10-07 11:31:42.763367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.499 [2024-10-07 11:31:42.763399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.499 [2024-10-07 11:31:42.763417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.499 [2024-10-07 11:31:42.763450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.499 [2024-10-07 11:31:42.763501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.763521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.763535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.763565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.767336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.767457] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.767487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.767505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.767537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.767569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.767587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.767601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.767631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.774677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.774793] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.774825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.774842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.774874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.774906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.774924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.774938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.774969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.777426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.777535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.777566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.777583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.777770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.777912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.777948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.777966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.778084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.784771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.784885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.784917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.784934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.784966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.784998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.785016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.785030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.785060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.788696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.788809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.788841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.788858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.788890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.788922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.788940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.788954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.788984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.795307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.795444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.795475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.795492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.795525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.795557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.795575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.795589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.795619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.798788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.798898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.798929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.798962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.798996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.799029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.799047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.799061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.799090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.805416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.805531] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.805562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.805579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.805611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.805798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.805825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.805840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.805972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.809340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.809461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.809492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.809519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.809551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.809584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.809602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.809616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.809646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.816808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.816923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.816955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.816973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.817005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.817037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.817062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.817086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.817118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.819430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.819540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.819577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.819595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.819627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.819659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.819677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.819692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.819721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.500 [2024-10-07 11:31:42.826904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.827018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.827055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.500 [2024-10-07 11:31:42.827074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.827107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.827139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.827156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.827171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.500 [2024-10-07 11:31:42.827200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.500 [2024-10-07 11:31:42.830926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.500 [2024-10-07 11:31:42.831075] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.500 [2024-10-07 11:31:42.831117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.500 [2024-10-07 11:31:42.831137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.500 [2024-10-07 11:31:42.831170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.500 [2024-10-07 11:31:42.831203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.500 [2024-10-07 11:31:42.831221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.500 [2024-10-07 11:31:42.831235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.831266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.837710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.837851] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.837896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.837915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.837948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.837980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.837998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.838012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.838042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.841026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.841135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.841168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.841185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.841217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.841249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.841267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.841281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.841311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.847817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.847931] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.847962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.847979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.848011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.848043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.848061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.848075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.848263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.851850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.851973] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.852004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.852021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.852072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.852105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.852123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.852137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.852168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.859245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.859427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.859470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.859490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.859523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.859556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.859574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.859588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.859619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.861938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.862046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.862076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.862093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.862124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.862157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.862175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.862189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.862219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.869425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.869547] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.869579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.869596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.869628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.869660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.869678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.869708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.869742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.873549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.873664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.873694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.873711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.873743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.873775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.873794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.873808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.873838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.880233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.880388] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.880420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.880438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.880470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.880503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.880523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.880537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.880568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.883642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.883753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.883784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.883801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.883833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.883865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.883883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.883897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.883928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.890353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.890466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.890513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.890532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.890565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.890598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.890616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.890630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.890816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.894347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.894470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.894502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.894520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.894552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.894584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.894603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.894617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.894647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.901732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.901848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.901880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.901897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.901929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.901961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.901979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.901994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.902024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.904434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.904543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.904574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.904591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.904623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.904670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.904689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.904703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.904888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.911821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.911935] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.911967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.911985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.912016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.912048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.912066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.912081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.912111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.501 [2024-10-07 11:31:42.915822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.915936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.915967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.501 [2024-10-07 11:31:42.915984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.916016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.501 [2024-10-07 11:31:42.916048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.501 [2024-10-07 11:31:42.916066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.501 [2024-10-07 11:31:42.916080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.501 [2024-10-07 11:31:42.916110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.501 [2024-10-07 11:31:42.922407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.501 [2024-10-07 11:31:42.922529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.501 [2024-10-07 11:31:42.922561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.501 [2024-10-07 11:31:42.922579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.501 [2024-10-07 11:31:42.922611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.922643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.922661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.922677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.922707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.925913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.926023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.926054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.926071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.926103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.926135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.926153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.926175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.926205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.932519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.932631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.932662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.932679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.932865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.933008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.933043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.933061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.933178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.936409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.936528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.936559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.936577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.936609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.936657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.936679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.936694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.936726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.943809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.943924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.943955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.943986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.944020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.944052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.944070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.944084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.944115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.946495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.946606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.946637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.946654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.946685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.946718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.946736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.946750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.946780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.953903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.954016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.954047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.954064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.954096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.954127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.954146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.954160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.954190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.957852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.957961] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.957993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.958010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.958041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.958073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.958107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.958123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.958154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.964524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.964647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.964678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.964696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.964728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.964760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.964777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.964791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.964821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.967939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.968050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.968082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.968099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.968130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.968162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.968181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.968195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.968225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.974619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.974731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.974762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.974779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.974812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.974844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.974862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.974876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.975060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.978571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.978708] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.978742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.978759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.978791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.978823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.978841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.978856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.978886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.985945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.986060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.986091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.986109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.986141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.986173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.986191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.986205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.986235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.988673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.988781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.988813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.502 [2024-10-07 11:31:42.988831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.988862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.989048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.989084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.989102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.989234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.502 [2024-10-07 11:31:42.996040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:42.996154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:42.996187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.502 [2024-10-07 11:31:42.996205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.502 [2024-10-07 11:31:42.996253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.502 [2024-10-07 11:31:42.996286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.502 [2024-10-07 11:31:42.996304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.502 [2024-10-07 11:31:42.996332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.502 [2024-10-07 11:31:42.996367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.502 [2024-10-07 11:31:42.999951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.502 [2024-10-07 11:31:43.000062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.502 [2024-10-07 11:31:43.000092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.000110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.000142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.000174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.000192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.000206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.000236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.006556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.006677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.006710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.006727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.006759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.006791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.006809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.006823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.006853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.010041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.010151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.010182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.010200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.010231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.010263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.010281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.010356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.010393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.016653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.016766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.016798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.016816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.016848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.016879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.016897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.016911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.016942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.020727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.020848] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.020879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.020896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.020928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.020961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.020979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.020994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.021024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.028051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.028219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.028252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.028270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.028302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.028351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.028370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.028385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.028415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.030821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.030930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.030976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.030995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.031027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.031213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.031242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.031258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.031406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.038144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.038257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.038300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.038344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.038380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.038413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.038431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.038445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.038475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.042146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.042256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.042298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.042330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.042366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.042399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.042417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.042431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.042461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.048725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.048846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.048877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.048894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.048926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.048993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.049015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.049030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.049059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.052232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.052356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.052389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.052406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.052439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.052471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.052490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.052504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.052534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.058814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.058926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.058958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.058975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.059008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.059194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.059221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.059235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.059387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.062753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.062873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.062905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.062922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.062955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.062987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.063005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.063019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.063049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.503 [2024-10-07 11:31:43.070097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.070217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.070249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.503 [2024-10-07 11:31:43.070266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.070311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.070365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.503 [2024-10-07 11:31:43.070383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.503 [2024-10-07 11:31:43.070397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.503 [2024-10-07 11:31:43.070428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.503 [2024-10-07 11:31:43.072844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.503 [2024-10-07 11:31:43.072952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.503 [2024-10-07 11:31:43.072982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.503 [2024-10-07 11:31:43.073000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.503 [2024-10-07 11:31:43.073031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.503 [2024-10-07 11:31:43.073218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.073247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.073262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.073407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.080190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.080303] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.080349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.080367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.080399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.080432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.080450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.080464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.080494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.084142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.084254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.084284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.084341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.084376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.084408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.084427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.084441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.084472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.090785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.090910] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.090941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.090959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.090991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.091023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.091041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.091055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.091086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.094234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.094374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.094419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.094436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.094469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.094501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.094520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.094534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.094565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.100883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.101010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.101041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.101059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.101090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.101122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.101157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.101172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.101381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.104853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.104974] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.105005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.105022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.105070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.105106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.105124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.105138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.105168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.112250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.112414] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.112447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.112465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.112497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.112530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.112548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.112562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.112593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.114943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.115054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.115085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.115102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.115133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.115165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.115184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.115198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.115228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.122360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.122490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.122522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.122540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.122572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.122604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.122621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.122636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.122666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.126411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.126525] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.126555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.126573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.126604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.126636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.126655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.126669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.126698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.133034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.133158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.133189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.133207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.133244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.133277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.133295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.133309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.133356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.136500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.136612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.136642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.136660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.136709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.136743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.136761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.136775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.136805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.143124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.143237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.143268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.143285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.143331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.143368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.143386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.143400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.143584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.147084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.147204] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.147234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.147252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.147283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.147329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.147350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.147364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.147395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.504 [2024-10-07 11:31:43.154484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.154599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.154630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.504 [2024-10-07 11:31:43.154647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.154680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.154712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.504 [2024-10-07 11:31:43.154730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.504 [2024-10-07 11:31:43.154759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.504 [2024-10-07 11:31:43.154791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.504 [2024-10-07 11:31:43.157173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.504 [2024-10-07 11:31:43.157282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.504 [2024-10-07 11:31:43.157313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.504 [2024-10-07 11:31:43.157344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.504 [2024-10-07 11:31:43.157377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.504 [2024-10-07 11:31:43.157410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.157428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.157442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.157626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.505 [2024-10-07 11:31:43.164580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.164693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.164725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.505 [2024-10-07 11:31:43.164742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.164774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.164806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.164824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.164838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.164868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.505 [2024-10-07 11:31:43.168537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.168650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.168681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.505 [2024-10-07 11:31:43.168698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.168729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.168762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.168780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.168795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.168825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.505 [2024-10-07 11:31:43.175102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.175223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.175270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.505 [2024-10-07 11:31:43.175289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.175337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.175374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.175392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.175406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.175436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.505 [2024-10-07 11:31:43.178626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.178738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.178769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.505 [2024-10-07 11:31:43.178787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.178818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.178850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.178868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.178883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.178930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.505 [2024-10-07 11:31:43.185192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.185306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.185350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.505 [2024-10-07 11:31:43.185369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.185555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.185697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.185732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.185750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.185868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.505 [2024-10-07 11:31:43.189040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.189159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.189189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.505 [2024-10-07 11:31:43.189207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.189239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.189286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.189305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.189335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.189368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.505 [2024-10-07 11:31:43.196468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.196585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.196617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.505 [2024-10-07 11:31:43.196634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.196667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.196699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.196717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.196731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.196762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.505 [2024-10-07 11:31:43.199127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.199238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.505 [2024-10-07 11:31:43.199268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.505 [2024-10-07 11:31:43.199285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.505 [2024-10-07 11:31:43.199330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.505 [2024-10-07 11:31:43.199366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.505 [2024-10-07 11:31:43.199384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.505 [2024-10-07 11:31:43.199399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.505 [2024-10-07 11:31:43.199428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.505 [2024-10-07 11:31:43.207530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.505 [2024-10-07 11:31:43.207645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.207676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.207694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.207726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.207758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.207776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.207790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.207839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.210492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.210603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.210634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.210652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.210683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.211503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.211540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.211558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.212438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.217620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.217738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.217769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.217787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.217818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.217850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.217868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.217882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.217912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.220584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.220693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.220724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.220742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.220773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.220805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.220822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.220837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.220867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.227929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.228043] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.228074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.228106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.228140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.228173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.228190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.228205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.228235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.230673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.230784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.230815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.230832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.230863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.231050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.231076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.231091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.231221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.238019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.238133] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.238165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.238182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.238214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.238246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.238263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.238277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.238337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.241986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.242097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.242127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.242144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.242176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.242208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.242242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.242258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.242302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.248577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.248697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.248728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.248746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.248778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.248826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.248848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.248863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.248893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.252076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.252188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.252220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.252237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.252269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.252301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.252334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.252350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.252382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.258669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.258782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.258813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.258830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.258862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.258894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.258912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.258926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.259110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.262660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.262797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.262828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.262846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.262878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.262920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.262938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.262953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.262984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.270067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.270232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.270277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.506 [2024-10-07 11:31:43.270339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.270394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.270444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.270472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.270494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.270562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.506 [2024-10-07 11:31:43.272786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.272946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.506 [2024-10-07 11:31:43.272988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.506 [2024-10-07 11:31:43.273014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.506 [2024-10-07 11:31:43.273287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.506 [2024-10-07 11:31:43.273510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.506 [2024-10-07 11:31:43.273570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.506 [2024-10-07 11:31:43.273596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.506 [2024-10-07 11:31:43.273764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.506 [2024-10-07 11:31:43.280190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.506 [2024-10-07 11:31:43.280368] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.280413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.280443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.280536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.280595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.280630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.280657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.282134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.285813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.286861] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.286922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.286953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.287086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.287375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.287434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.287468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.287657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.291067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.291263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.291333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.291367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.291407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.292627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.292667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.292685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.292903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.297008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.298347] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.298395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.298416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.298569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.298612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.298631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.298661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.298696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.301191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.301301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.301347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.301366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.301553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.301696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.301731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.301748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.301877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.307112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.307226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.307258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.307276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.307308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.308208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.308246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.308263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.308464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.312577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.312694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.312726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.312743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.312776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.312808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.312826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.312841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.312872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.318041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.318156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.318205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.318225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.318258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.318309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.318352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.318367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.318400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.322672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.322787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.322819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.322837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.322870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.322902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.322919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.322934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.322964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.328959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.329073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.329105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.329123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.329155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.329188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.329206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.329221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.329250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.333472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.333593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.333625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.333642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.333675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.333726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.333746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.333760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.333791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.340883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.341053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.341087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.341105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.341138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.341170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.341189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.341204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.341235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.343564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.343676] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.343707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.343724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.343756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.343788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.343807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.343821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.343850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.351005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.351118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.351150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.351167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.351200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.351232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.351251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.351265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.351329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.355141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.355257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.355293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.355310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.355359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.355392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.355410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.355424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.355454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.507 [2024-10-07 11:31:43.361819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.361959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.361991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.507 [2024-10-07 11:31:43.362009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.362041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.362073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.362091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.362106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.362136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.507 [2024-10-07 11:31:43.365231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.507 [2024-10-07 11:31:43.365356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.507 [2024-10-07 11:31:43.365389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.507 [2024-10-07 11:31:43.365407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.507 [2024-10-07 11:31:43.365439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.507 [2024-10-07 11:31:43.365472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.507 [2024-10-07 11:31:43.365490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.507 [2024-10-07 11:31:43.365504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.507 [2024-10-07 11:31:43.365534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.371909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.372023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.372056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.372090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.372124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.372157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.372176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.372190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.372220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.376037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.376160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.376192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.376210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.376242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.376275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.376294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.376308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.376355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.383415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.383568] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.383601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.383619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.383652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.383685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.383710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.383724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.383755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.386126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.386235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.386266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.386283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.386344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.386381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.386416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.386438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.386624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.393510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.393624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.393656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.393674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.393705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.393737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.393756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.393770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.393800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.397530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.397645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.397676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.397693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.397725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.397758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.397776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.397790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.397820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.404165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.404286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.404330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.404351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.404384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.404417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.404435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.404450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.404480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.407626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.407756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.407788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.407806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.407838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.407869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.407888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.407902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.407932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.414260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.414417] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.414460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.414478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.414510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.414543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.414561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.414575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.414761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.418296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.418455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.418487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.418505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.418538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.418571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.418589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.418603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.418634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.425689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.425804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.425836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.425854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.425906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.425942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.425960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.425974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.426005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.428398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.428507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.428538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.428556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.428587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.428620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.428637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.428652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.428836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.435785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.435898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.435930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.435947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.435980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.436012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.436031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.436045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.436075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.439798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.439950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.439982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.439999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.440032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.440064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.440082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.440115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.440147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.446572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.446694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.446726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.446743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.446775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.446807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.446826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.446840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.446870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.449918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.450027] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.450057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.508 [2024-10-07 11:31:43.450075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.450106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.450139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.450157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.450172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.450201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.508 [2024-10-07 11:31:43.456662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.456774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.508 [2024-10-07 11:31:43.456805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.508 [2024-10-07 11:31:43.456823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.508 [2024-10-07 11:31:43.456855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.508 [2024-10-07 11:31:43.456887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.508 [2024-10-07 11:31:43.456905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.508 [2024-10-07 11:31:43.456919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.508 [2024-10-07 11:31:43.456949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.508 [2024-10-07 11:31:43.460735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.508 [2024-10-07 11:31:43.460855] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.460903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.460923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.460956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.460988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.461007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.461021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.461051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.468169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.468285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.468329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.468350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.468383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.468416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.468435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.468449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.468480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.470824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.470936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.470967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.470984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.471030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.471066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.471084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.471098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.471128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.478378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.478490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.478522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.478540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.478572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.478621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.478641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.478655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.478685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.482242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.482385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.482418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.482436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.482688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.482780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.482810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.482826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.482859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.488470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.488593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.488624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.488642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.488674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.488706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.488725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.488739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.488768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.493241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.493363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.493395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.493413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.493445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.493488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.493509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.493523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.493571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 8750.00 IOPS, 34.18 MiB/s [2024-10-07T11:31:53.032Z] [2024-10-07 11:31:43.500672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.500884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.500918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.500936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.501055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.501118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.501141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.501156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.501188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.503342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.503451] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.503489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.503508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.504075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.504272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.504308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.504339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.504448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.511648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.511761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.511792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.511810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.511842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.511874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.511892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.511907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.511936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.513845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.513953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.513984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.514017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.514050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.514083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.514101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.514115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.514156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.521742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.521856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.521887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.521904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.521936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.521968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.521986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.522000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.522031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.525628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.525740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.525771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.525788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.525821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.525853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.525871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.525885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.525915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.532277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.532409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.532451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.532470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.532503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.532536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.532571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.532586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.532618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.509 [2024-10-07 11:31:43.535718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.535827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.535858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.509 [2024-10-07 11:31:43.535876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.535907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.535939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.535956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.535971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.509 [2024-10-07 11:31:43.536001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.509 [2024-10-07 11:31:43.542380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.509 [2024-10-07 11:31:43.542501] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.509 [2024-10-07 11:31:43.542532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.509 [2024-10-07 11:31:43.542551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.509 [2024-10-07 11:31:43.542583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.509 [2024-10-07 11:31:43.542615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.509 [2024-10-07 11:31:43.542633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.509 [2024-10-07 11:31:43.542647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.542677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.545806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.545917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.545948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.545966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.545997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.546030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.546048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.546063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.546666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.554411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.554594] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.554627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.554645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.554678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.554711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.554730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.554744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.554775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.556728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.556837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.556868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.556890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.556922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.556955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.556972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.556987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.557016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.564553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.564665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.564696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.564713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.564745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.564777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.564796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.564810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.564841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.568616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.568727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.568758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.568776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.568827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.568861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.568879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.568893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.568928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.575349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.575470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.575502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.575520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.575553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.575586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.575605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.575619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.575649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.578709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.578822] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.578853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.578870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.578901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.578934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.578952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.578966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.578996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.585443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.585585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.585622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.585640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.585672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.585704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.585722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.585754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.585957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.589501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.589623] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.589655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.589672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.589705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.589738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.589756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.589770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.589801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.596714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.596827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.596859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.596877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.596909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.596941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.596959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.596973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.597004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.600214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.600341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.600373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.600391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.600575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.600647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.600671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.600685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.600716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.606802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.606937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.606969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.606987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.607019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.607052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.607070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.607085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.607115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.610563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.610677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.610708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.610726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.610758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.610790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.610808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.610822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.610852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.616906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.617018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.617049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.617066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.617786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.618403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.618443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.618460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.618694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.620659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.620770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.620800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.620817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.620849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.620913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.620936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.620950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.620980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.510 [2024-10-07 11:31:43.626995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.627108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.627140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.510 [2024-10-07 11:31:43.627158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.627189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.627221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.627239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.627254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.627283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.510 [2024-10-07 11:31:43.632945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.510 [2024-10-07 11:31:43.633164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.510 [2024-10-07 11:31:43.633197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.510 [2024-10-07 11:31:43.633215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.510 [2024-10-07 11:31:43.633247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.510 [2024-10-07 11:31:43.633280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.510 [2024-10-07 11:31:43.633298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.510 [2024-10-07 11:31:43.633312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.510 [2024-10-07 11:31:43.633361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.637089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.637199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.637230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.637248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.637279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.637312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.637350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.637365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.637412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.643039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.643153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.643184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.643202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.643234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.643266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.643284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.643298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.643342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.647178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.647290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.647335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.647355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.647388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.647420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.647438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.647452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.647482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.654300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.654429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.654460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.654477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.654509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.654541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.654560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.654574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.654604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.657602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.657711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.657741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.657776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.657810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.657842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.657860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.657874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.657904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.666238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.666373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.666406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.666424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.666457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.666489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.666507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.666521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.666551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.668470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.668579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.668609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.668626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.668658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.668690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.668708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.668722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.668752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.676350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.676461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.676493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.676510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.676542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.676574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.676608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.676624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.676655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.680303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.680429] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.680461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.680478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.680510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.680542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.680561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.680575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.680606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.686950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.687069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.687100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.687117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.687165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.687201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.687219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.687233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.687262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.690405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.690516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.690547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.690564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.690595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.690627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.690645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.690659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.690689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.697042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.697157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.697189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.697206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.697238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.697270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.697288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.697302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.697522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.701062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.701184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.701215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.701232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.701264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.701296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.701327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.701345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.701378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.708407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.708557] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.708589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.708607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.708639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.708672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.708690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.708703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.708734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.711291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.711416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.711448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.711465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.711515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.711548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.711566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.711580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.711610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.718692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.718807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.718838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.511 [2024-10-07 11:31:43.718855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.718888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.718920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.718938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.718953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.719858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.511 [2024-10-07 11:31:43.722868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.511 [2024-10-07 11:31:43.724156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.511 [2024-10-07 11:31:43.724200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.511 [2024-10-07 11:31:43.724220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.511 [2024-10-07 11:31:43.724371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.511 [2024-10-07 11:31:43.724413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.511 [2024-10-07 11:31:43.724432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.511 [2024-10-07 11:31:43.724446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.511 [2024-10-07 11:31:43.724477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.511 [2024-10-07 11:31:43.729553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.729668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.729700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.729717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.729749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.729781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.729799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.729830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.729863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.732957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.733068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.733099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.733117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.734020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.734228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.734264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.734282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.734389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.740459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.740574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.740606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.740623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.740656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.740688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.740706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.740720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.740750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.743682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.743796] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.743827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.743845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.743877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.743910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.743928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.743942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.743972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.752283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.752488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.752521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.752539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.752572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.752605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.752624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.752638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.752669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.754580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.754691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.754722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.754740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.754772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.754803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.754822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.754837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.754866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.762865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.762988] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.763020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.763038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.764226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.764482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.764519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.764537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.765357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.766736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.766847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.766878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.766896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.766928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.766979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.766998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.767012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.767043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.773048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.773162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.773194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.773212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.773244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.773277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.773295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.773309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.773355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.776828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.776939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.776970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.776988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.778173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.778447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.778485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.778503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.779306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.783139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.783249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.783281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.783298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.783345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.783381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.783399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.783413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.783616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.787133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.787247] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.787278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.787296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.787342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.787378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.787396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.787411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.787440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.794537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.794657] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.794689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.794707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.794739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.794772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.794790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.794805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.794834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.797226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.797349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.797380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.797398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.797431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.797463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.797482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.797496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.797526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.512 [2024-10-07 11:31:43.804634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.804758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.804790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.512 [2024-10-07 11:31:43.804830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.804865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.804898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.512 [2024-10-07 11:31:43.804917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.512 [2024-10-07 11:31:43.804932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.512 [2024-10-07 11:31:43.804962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.512 [2024-10-07 11:31:43.808711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.512 [2024-10-07 11:31:43.808826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.512 [2024-10-07 11:31:43.808857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.512 [2024-10-07 11:31:43.808875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.512 [2024-10-07 11:31:43.808907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.512 [2024-10-07 11:31:43.808940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.808958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.808973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.809003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.815390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.815516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.815548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.815566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.815599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.815631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.815649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.815663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.815693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.818804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.818917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.818948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.818966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.818998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.819030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.819070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.819086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.819118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.825502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.825618] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.825650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.825667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.825700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.825892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.825919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.825934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.826066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.829456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.829575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.829607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.829624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.829672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.829709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.829737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.829752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.829784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.836869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.836986] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.837017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.837035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.837067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.837099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.837117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.837132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.837162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.839549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.839662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.839693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.839711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.839742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.839776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.839804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.839820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.840007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.846962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.847077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.847109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.847126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.847158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.847191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.847208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.847223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.847253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.850947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.851068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.851100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.851118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.851151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.851183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.851202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.851217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.851248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.857561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.857685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.857716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.857734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.857786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.857819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.857837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.857851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.857882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.861044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.861157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.861189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.861206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.861238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.861270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.861288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.861303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.861348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.867654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.867771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.867803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.867821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.867853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.867885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.867903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.867918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.867948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.871671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.871795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.871827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.871845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.871878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.871910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.871929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.871970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.872003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.879144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.879263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.879295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.879312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.879361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.879394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.879412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.879426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.879456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.881761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.881874] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.881905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.881923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.881970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.882005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.882025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.882040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.882224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.889242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.889369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.889401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.889419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.889452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.889485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.889503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.889517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.889547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.893198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.893352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.893385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.513 [2024-10-07 11:31:43.893403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.893437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.893470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.893488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.893503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.893533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.513 [2024-10-07 11:31:43.899860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.899987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.900019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.513 [2024-10-07 11:31:43.900036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.513 [2024-10-07 11:31:43.900070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.513 [2024-10-07 11:31:43.900103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.513 [2024-10-07 11:31:43.900121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.513 [2024-10-07 11:31:43.900136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.513 [2024-10-07 11:31:43.900166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.513 [2024-10-07 11:31:43.903326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.513 [2024-10-07 11:31:43.903446] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.513 [2024-10-07 11:31:43.903477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.903495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.903528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.903560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.903579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.903593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.903624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.909955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.910071] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.910111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.910129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.910162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.910218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.910239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.910253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.910467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.913986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.914115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.914148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.914165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.914203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.914236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.914255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.914270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.914314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.921447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.921567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.921599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.921617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.921650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.921683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.921701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.921716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.921746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.924085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.924198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.924229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.924247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.924278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.924311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.924345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.924360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.924569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.931544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.931660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.931692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.931709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.931742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.931774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.931792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.931807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.931837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.935539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.935654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.935685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.935703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.935736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.935769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.935787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.935802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.935831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.942212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.942380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.942423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.942443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.942478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.942511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.942529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.942544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.942575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.945628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.945739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.945775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.945820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.945854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.945888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.945907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.945921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.945951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.952303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.952433] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.952469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.952488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.952520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.952552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.952570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.952584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.952614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.956363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.956487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.956520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.956538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.956571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.956604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.956623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.956637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.956667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.963752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.963870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.963903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.963921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.963953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.963986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.964028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.964043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.964075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.966455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.966566] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.966603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.966622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.966653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.966686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.966704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.966718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.966903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.973845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.973959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.973990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.974008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.974040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.974072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.974090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.974104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.974134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.977816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.977927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.977958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.977976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.978008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.978041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.978059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.978074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.978103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.984447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.984577] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.984609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.514 [2024-10-07 11:31:43.984627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.984659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.984692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.984709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.984723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.984753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.514 [2024-10-07 11:31:43.987906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.988017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.514 [2024-10-07 11:31:43.988048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.514 [2024-10-07 11:31:43.988066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.514 [2024-10-07 11:31:43.988097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.514 [2024-10-07 11:31:43.988130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.514 [2024-10-07 11:31:43.988148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.514 [2024-10-07 11:31:43.988162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.514 [2024-10-07 11:31:43.988192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.514 [2024-10-07 11:31:43.994546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.514 [2024-10-07 11:31:43.994672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:43.994703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:43.994721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:43.994753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:43.994785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:43.994803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:43.994819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:43.995012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:43.998526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:43.998653] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:43.998685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:43.998703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:43.998766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:43.998802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:43.998829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:43.998843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:43.998874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.005951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.006072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.006105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.006122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.006154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.006187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.006205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.006229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.006259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.008623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.008733] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.008764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.008781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.008812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.008844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.008863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.008877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.009061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.016047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.016161] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.016192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.016210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.016241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.016273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.016291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.016343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.016377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.019999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.020114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.020146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.020164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.020196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.020227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.020245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.020260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.020290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.026685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.026807] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.026838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.026855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.026887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.026920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.026938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.026952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.026982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.030091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.030199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.030230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.030246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.030278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.030340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.030362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.030377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.030407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.036781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.036911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.036943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.036961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.036993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.037025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.037043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.037057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.037245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.040729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.040849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.040881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.040898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.040930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.040962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.040980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.040994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.041024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.048122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.048237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.048268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.048286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.048332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.048367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.048386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.048400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.048430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.050837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.050946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.050977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.050994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.051026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.051228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.051255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.051270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.051427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.058214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.058352] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.058385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.058402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.058436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.058469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.058487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.058502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.058531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.062072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.062193] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.062224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.062242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.062273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.062334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.062356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.062371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.062401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.068652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.068772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.068804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.068821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.068853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.068889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.068907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.068921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.068969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.072159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.072272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.072303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.072343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.072377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.072410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.072427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.072442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.072489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.078744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.078857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.078888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.515 [2024-10-07 11:31:44.078906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.079092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.079234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.079269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.079287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.079418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.515 [2024-10-07 11:31:44.082520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.515 [2024-10-07 11:31:44.082640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.515 [2024-10-07 11:31:44.082671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.515 [2024-10-07 11:31:44.082688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.515 [2024-10-07 11:31:44.082720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.515 [2024-10-07 11:31:44.082752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.515 [2024-10-07 11:31:44.082770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.515 [2024-10-07 11:31:44.082785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.515 [2024-10-07 11:31:44.082814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.515 [2024-10-07 11:31:44.089846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.089959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.089990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.090026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.090060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.090092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.090111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.090125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.090155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.092609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.092718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.092748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.092766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.092951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.093093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.093124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.093141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.093257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.099941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.100053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.100084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.100102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.100133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.100165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.100183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.100197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.100226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.103751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.103863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.103895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.103913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.103944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.103976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.104013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.104028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.104059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.110260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.110407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.110440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.110458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.110490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.110540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.110561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.110576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.110606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.113841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.113952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.113982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.114000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.114032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.114064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.114083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.114097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.114126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.120378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.120490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.120521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.120539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.120725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.120865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.120901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.120918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.121037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.124180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.124301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.124348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.124367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.124400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.124433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.124452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.124466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.124496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.131502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.131621] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.131652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.131670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.131702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.131734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.131753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.131767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.131796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.134273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.134401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.134432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.134450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.134636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.134777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.134813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.134830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.134948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.141599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.141711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.141743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.141760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.141807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.141840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.141858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.141873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.141903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.147050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.148084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.148150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.148185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.148560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.148773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.148828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.148860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.149030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.152141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.152337] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.152385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.152418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.153713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.153957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.153985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.154001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.154043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.159151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.159406] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.159446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.159465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.159499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.159532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.159551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.159581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.159615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.162579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.162718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.162751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.162769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.162801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.162833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.162851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.162865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.162896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.169245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.169371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.169403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.169420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.169453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.169486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.169504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.169518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.169548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.173293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.173421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.173452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.516 [2024-10-07 11:31:44.173470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.173502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.173535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.173553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.173567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.173598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.516 [2024-10-07 11:31:44.179851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.179992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.516 [2024-10-07 11:31:44.180024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.516 [2024-10-07 11:31:44.180043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.516 [2024-10-07 11:31:44.180076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.516 [2024-10-07 11:31:44.180129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.516 [2024-10-07 11:31:44.180151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.516 [2024-10-07 11:31:44.180166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.516 [2024-10-07 11:31:44.180196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.516 [2024-10-07 11:31:44.183397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.516 [2024-10-07 11:31:44.183510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.183541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.183559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.183590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.183622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.183640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.183655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.183684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.189961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.190074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.190106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.190124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.190336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.190480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.190516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.190534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.190653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.193774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.193893] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.193924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.193942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.193991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.194025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.194044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.194058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.194088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.201123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.201236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.201266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.201284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.201330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.201367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.201386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.201400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.201431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.203858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.203980] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.204011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.204028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.204214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.204373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.204409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.204427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.204546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.211211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.211342] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.211375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.211393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.211426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.211458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.211477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.211491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.211537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.215165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.215278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.215310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.215344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.215378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.215411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.215429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.215443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.215474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.221659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.221781] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.221813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.221831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.221864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.221897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.221915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.221929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.221958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.225257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.225383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.225415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.225433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.225465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.225497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.225515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.225530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.226728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.231752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.231879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.231927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.231946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.232134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.232278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.232314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.232348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.232467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.235571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.235683] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.235714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.235733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.235765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.235814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.235836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.235850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.235880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.242881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.242995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.243026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.243043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.243075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.243107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.243125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.243140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.243170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.245659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.245932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.245975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.245994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.246125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.246266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.246311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.246343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.246406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.252970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.253084] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.253116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.253133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.253165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.253213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.253235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.253250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.254456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.256799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.256908] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.256939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.256957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.256988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.257020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.257038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.257053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.257083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.263226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.263353] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.263386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.263404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.263437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.263470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.263488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.263502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.263533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.517 [2024-10-07 11:31:44.266891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.267001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.267032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.517 [2024-10-07 11:31:44.267050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.267081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.517 [2024-10-07 11:31:44.268271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.517 [2024-10-07 11:31:44.268310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.517 [2024-10-07 11:31:44.268342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.517 [2024-10-07 11:31:44.268551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.517 [2024-10-07 11:31:44.273329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.517 [2024-10-07 11:31:44.273595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.517 [2024-10-07 11:31:44.273634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.517 [2024-10-07 11:31:44.273653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.517 [2024-10-07 11:31:44.273785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.273916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.273951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.273969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.274029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.277065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.277176] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.277206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.277224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.277256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.277288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.277306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.277334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.277367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.284369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.284483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.284515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.284547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.284582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.284614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.284633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.284647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.284677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.287153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.287436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.287480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.287500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.287633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.287762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.287797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.287815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.287875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.294465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.294578] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.294610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.294628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.294660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.294691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.294710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.294724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.295912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.298268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.298400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.298433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.298450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.298483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.298515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.298533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.298561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.298594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.304707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.304821] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.304852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.304870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.304902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.304934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.304952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.304966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.304996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.308376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.308486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.308517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.308535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.309733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.309964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.310004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.310023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.310852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.314801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.314912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.314944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.314961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.315148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.315289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.315337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.315358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.315477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.318558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.318686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.318717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.318735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.318767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.318799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.318817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.318831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.318861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.325878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.325990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.326021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.326039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.326071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.326103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.326121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.326135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.326165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.328665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.328773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.328804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.328821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.329007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.329147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.329183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.329200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.329331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.335971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.336083] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.336114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.336132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.336178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.336211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.336229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.336243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.336273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.339829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.339943] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.339974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.339992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.340024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.340056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.340080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.340094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.340124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.346277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.346432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.346465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.346483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.346515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.346564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.346586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.346601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.346632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.349917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.350026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.350056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.350074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.350105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.350137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.350155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.350190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.351414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.356402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.356514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.356545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.356563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.356762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.356902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.356930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.356945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.357062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.360270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.360403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.360435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.360452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.360485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.360517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.360535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.360549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.360580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.518 [2024-10-07 11:31:44.367593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.367706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.367737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.518 [2024-10-07 11:31:44.367754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.367786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.367818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.518 [2024-10-07 11:31:44.367836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.518 [2024-10-07 11:31:44.367851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.518 [2024-10-07 11:31:44.367880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.518 [2024-10-07 11:31:44.370376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.518 [2024-10-07 11:31:44.370485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.518 [2024-10-07 11:31:44.370529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.518 [2024-10-07 11:31:44.370548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.518 [2024-10-07 11:31:44.370736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.518 [2024-10-07 11:31:44.370878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.370904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.370919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.371035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.377686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.377799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.377831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.377848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.377880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.377912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.377930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.377945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.377976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.381738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.381860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.381891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.381908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.381941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.381973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.381991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.382005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.382035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.388405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.388527] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.388558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.388576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.388608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.388661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.388682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.388697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.388727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.391838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.391951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.391982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.392000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.392031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.392064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.392082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.392096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.392126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.398493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.398615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.398646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.398664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.398710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.398745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.398763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.398777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.398807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.402607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.402744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.402776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.402794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.402827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.402872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.402892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.402906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.402937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.410023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.410147] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.410178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.410196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.410228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.410272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.410306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.410348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.410383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.412701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.412810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.412840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.412857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.412890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.412922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.412940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.412954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.412984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.420123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.420237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.420268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.420286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.420334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.420381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.420402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.420417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.420447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.424125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.424277] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.424309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.424362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.424398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.424432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.424450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.424465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.424496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.430870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.430991] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.431024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.431041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.431074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.431106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.431124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.431138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.431168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.434216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.434348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.434380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.434398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.434431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.434463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.434481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.434495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.434525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.440961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.441077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.441108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.441126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.441157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.441190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.441228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.441244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.441446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.444909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.445029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.445060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.445078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.445110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.445142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.445163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.445177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.445207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.452360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.452473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.452504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.452522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.452554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.452586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.452604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.452618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.452648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.454997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.455114] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.455144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.455162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.455208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.455242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.455260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.455275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.455304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.519 [2024-10-07 11:31:44.462452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.462586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.462617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.519 [2024-10-07 11:31:44.462635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.462670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.462702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.519 [2024-10-07 11:31:44.462721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.519 [2024-10-07 11:31:44.462735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.519 [2024-10-07 11:31:44.462764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.519 [2024-10-07 11:31:44.466461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.519 [2024-10-07 11:31:44.466575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.519 [2024-10-07 11:31:44.466606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.519 [2024-10-07 11:31:44.466623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.519 [2024-10-07 11:31:44.466663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.519 [2024-10-07 11:31:44.466696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.466716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.466729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.466759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.473099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.473228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.473259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.473277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.473309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.473359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.473378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.473393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.473424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.476555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.476664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.476695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.476712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.476752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.476793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.476812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.476826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.476856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.483188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.483301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.483347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.483366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.483398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.483430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.483448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.483463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.483492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.487198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.487334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.487367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.487385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.487418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.487450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.487468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.487482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.487512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.494610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.494728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.494759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.494777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.494810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.494846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.494864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.494895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.494928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.497292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.497413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.497445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.497463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.497494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 8778.25 IOPS, 34.29 MiB/s [2024-10-07T11:31:53.043Z] [2024-10-07 11:31:44.499299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.499344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.499364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.499524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.504707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.504819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.504850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.504868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.504900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.504941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.504959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.504974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.505003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
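The fragment "8778.25 IOPS, 34.29 MiB/s [2024-10-07T11:31:53.043Z]" interleaved in the entry above is the periodic throughput counter from the I/O load running alongside the resets, not part of the error output. The two numbers are consistent with roughly 4 KiB I/Os: 8778.25 IOPS × 4096 B ≈ 35.96 MB/s ≈ 34.29 MiB/s (the configured block size itself is not shown in this excerpt).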
00:20:57.520 [2024-10-07 11:31:44.508708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.508858] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.508895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.508913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.508946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.508979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.508997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.509011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.509042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.515474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.515682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.515753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.515775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.515817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.515856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.515874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.515889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.515920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.518912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.519023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.519054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.519072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.519104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.519136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.519154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.519168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.519198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.525569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.525686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.525719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.525737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.525769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.525801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.525819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.525834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.525864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.529747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.529870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.529901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.529919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.529951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.530002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.530021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.530035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.530066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.537191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.537375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.537418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.537438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.537472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.537505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.537524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.537545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.537597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.539839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.539953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.539984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.540002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.540034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.540066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.540083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.540098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.540128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.547389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.547504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.547536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.547553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.547592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.547625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.547643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.547657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.547704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.551525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.551641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.551673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.551690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.551722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.551755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.551773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.551787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.551817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.558188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.558355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.558389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.558417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.558452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.558484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.558503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.558517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.558547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.561616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.561725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.561756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.561773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.561804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.561848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.561866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.561881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.561910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.520 [2024-10-07 11:31:44.568278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.568404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.568436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.520 [2024-10-07 11:31:44.568472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.520 [2024-10-07 11:31:44.568506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.520 [2024-10-07 11:31:44.568697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.520 [2024-10-07 11:31:44.568735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.520 [2024-10-07 11:31:44.568753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.520 [2024-10-07 11:31:44.568887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.520 [2024-10-07 11:31:44.572257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.520 [2024-10-07 11:31:44.572391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.520 [2024-10-07 11:31:44.572423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.520 [2024-10-07 11:31:44.572441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.572473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.572506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.572524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.572538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.572568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.579627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.579743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.579774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.579791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.579823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.579856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.579874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.579888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.579919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.582360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.582469] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.582501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.582518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.582714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.582868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.582916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.582935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.583055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.589722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.589840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.589872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.589889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.589921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.589953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.589972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.589986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.590016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.593635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.593750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.593782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.593799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.593843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.593876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.593894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.593908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.593937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.600269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.600416] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.600449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.600467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.600500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.600533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.600551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.600566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.600596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.603727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.603860] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.603892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.603910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.603942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.603975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.603993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.604007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.604037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.610386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.610499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.610531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.610549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.610597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.610633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.610651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.610666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.610849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.614367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.614489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.614520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.614538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.614571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.614620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.614642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.614656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.614687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.621746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.621862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.621894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.621911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.621964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.621998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.622017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.622032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.622062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.624457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.624567] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.624598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.624615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.624648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.624680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.624698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.624712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.624905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.631840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.631953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.631985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.632002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.632034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.632066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.632085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.632099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.632129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.635889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.636006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.636037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.636054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.636086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.636118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.636136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.636169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.636202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.642588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.642711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.642744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.642762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.642794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.642827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.642846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.642860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.642890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.645978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.646090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.646121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.646138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.646170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.646202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.646220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.646234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.646264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.652681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.652794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.652825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.652843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.652874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.652918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.652954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.652969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.653002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.656792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.656913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.656965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.656984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.657017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.657049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.657067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.657081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.657112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.664200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.664330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.664362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.664379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.664413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.664445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.664463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.664477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.664507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.666885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.667010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.667040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc22e0 with addr=10.0.0.3, port=4420 00:20:57.521 [2024-10-07 11:31:44.667058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc22e0 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.667090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc22e0 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.667122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.667139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.667153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.667374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.521 [2024-10-07 11:31:44.674329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.674436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.521 [2024-10-07 11:31:44.674468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.521 [2024-10-07 11:31:44.674485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.521 [2024-10-07 11:31:44.674516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.521 [2024-10-07 11:31:44.674566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.521 [2024-10-07 11:31:44.674587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.521 [2024-10-07 11:31:44.674601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.521 [2024-10-07 11:31:44.674631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.521 [2024-10-07 11:31:44.678384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.684433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.521 [2024-10-07 11:31:44.685121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.685167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.685188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.685386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.685512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.685543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.685560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.685604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.692810] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:57.522 [2024-10-07 11:31:44.695189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.695312] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.695359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.695377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.695415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.695452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.695471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.695485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.695520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.707778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.709693] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.709746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.709769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.710149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.711960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.712033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.712053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.712227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.719162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.719382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.719415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.719433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.719547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.719626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.719662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.719679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.719716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.729904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.730217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.730263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.730294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.730424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.730492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.730516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.730531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.730568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.740007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.740127] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.740159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.740177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.740214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.740250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.740268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.740282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.740334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.750350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.751026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.751072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.751092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.751257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.751392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.751416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.751431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.751475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.760519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.760637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.760669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.760686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.760723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.760759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.760778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.760792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.760826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.771581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.771705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.771736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.771753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.771789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.771825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.771844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.771859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.771893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.782423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.782543] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.782575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.782593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.782639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.782684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.782703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.782718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.782752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.792755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.792892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.792935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.792953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.793209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.793376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.793407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.793424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.793535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.802888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.803006] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.803038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.803056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.803092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.803127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.803145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.803160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.803194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.813655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.813777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.813809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.813826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.813862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.813898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.813917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.813948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.813987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.824358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.824486] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.824518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.824536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.824572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.824608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.824626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.824641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.824675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.834831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.834957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.834988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.835006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.835261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.835435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.835471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.835489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.835602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.844934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.845050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.845083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.845101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.845137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.845182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.845200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.845214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.845249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.855792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.855915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.855964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.855984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.856021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.856057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.856076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.856090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.856125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.522 [2024-10-07 11:31:44.866477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.866602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.866634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.866651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.866687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.866725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.866743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.866757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.866791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.522 [2024-10-07 11:31:44.876901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.522 [2024-10-07 11:31:44.877020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.522 [2024-10-07 11:31:44.877052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.522 [2024-10-07 11:31:44.877070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.522 [2024-10-07 11:31:44.877344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.522 [2024-10-07 11:31:44.877508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.522 [2024-10-07 11:31:44.877551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.522 [2024-10-07 11:31:44.877569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.522 [2024-10-07 11:31:44.877681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.887005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.887122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.887154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.887172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.887213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.887269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.887288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.887303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.887352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:44.897835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.897953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.897985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.898003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.898040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.898076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.898095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.898109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.898143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.908522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.908642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.908674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.908692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.908727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.908764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.908781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.908796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.908830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:44.918974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.919093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.919125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.919143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.919414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.919587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.919623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.919641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.919753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.929074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.929192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.929224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.929241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.929276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.929312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.929348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.929363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.929399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:44.939903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.940025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.940058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.940075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.940111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.940147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.940167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.940181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.940215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.950546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.950666] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.950698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.950715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.950751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.950787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.950806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.950820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.950855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:44.960878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.961005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.961037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.961072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.961368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.961538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.961573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.961591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.961703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.970989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.971107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.971139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.971156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.971191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.971228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.971246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.971260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.971294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:44.981698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.981818] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.981850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.981867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.981904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.981940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.981959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.981973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.982007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:44.992240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:44.992372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:44.992404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:44.992422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:44.992459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:44.992496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:44.992532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:44.992548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:44.992585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:45.002724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.002842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.002874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.002891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.003147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.003310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.003358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.003376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.003488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:45.012822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.012941] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.012973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.012991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.013027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.013072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.013089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.013104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.013140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:45.023535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.023655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.023687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.023704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.023740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.023776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.023795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.023810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.023844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:45.034104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.034242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.034275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.034313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.034369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.034406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.034425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.034439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.034474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:45.044525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.044648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.044680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.044701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.044958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.045109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.045146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.045164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.045277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:45.054711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.054831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.054863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.054881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.054917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.054953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.054971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.054986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.055020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:45.065497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.065617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.065649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.065667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.065720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.065758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.065777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.065791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.065827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:45.076055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.076173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.076205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.076223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.076270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.076308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.076342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.076357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.076397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.523 [2024-10-07 11:31:45.086417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.086537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.086569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.523 [2024-10-07 11:31:45.086586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.523 [2024-10-07 11:31:45.086842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.523 [2024-10-07 11:31:45.086994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.523 [2024-10-07 11:31:45.087030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.523 [2024-10-07 11:31:45.087048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.523 [2024-10-07 11:31:45.087161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.523 [2024-10-07 11:31:45.096519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.523 [2024-10-07 11:31:45.096644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.523 [2024-10-07 11:31:45.096676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.096693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.096729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.096766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.096785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.096831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.096869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.107246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.107380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.107412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.107430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.107466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.107502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.107520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.107535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.107569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.117844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.117964] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.117996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.118013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.118049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.118085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.118108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.118123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.118157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.128190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.128338] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.128370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.128388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.128646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.128822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.128858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.128875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.128987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.138300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.138431] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.138480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.138499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.138537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.138573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.138592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.138606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.138640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.149008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.149140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.149172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.149191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.149227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.149264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.149282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.149297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.149348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.159630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.159753] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.159786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.159803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.159839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.159875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.159894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.159908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.159942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.170024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.170143] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.170175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.170192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.170497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.170685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.170722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.170740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.170852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.180133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.180252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.180284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.180301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.180351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.180390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.180409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.180423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.180457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.190828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.190945] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.190977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.190995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.191031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.191067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.191086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.191101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.191134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.201607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.201727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.201759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.201777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.201813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.201859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.201877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.201891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.201925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.211995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.212116] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.212148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.212165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.212446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.212610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.212646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.212664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.212776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.222163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.222283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.222338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.222357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.222395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.222433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.222458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.222472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.222507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.233022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.233155] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.233187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.233205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.233242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.233279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.233298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.233312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.233366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.244376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.244555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.244606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.244687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.245725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.245946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.245993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.246011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.246989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.254829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.255032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.255076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.255097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.255141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.255179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.255198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.255213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.255247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.265848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.265971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.266004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.266022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.266058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.266095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.266113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.266128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.266162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.275951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.276072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.276105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.276123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.276159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.276196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.276232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.276248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.276284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.524 [2024-10-07 11:31:45.286059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.286180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.286213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.286231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.286267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.286332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.286355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.286370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.524 [2024-10-07 11:31:45.286404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.524 [2024-10-07 11:31:45.296381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.524 [2024-10-07 11:31:45.296500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.524 [2024-10-07 11:31:45.296532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.524 [2024-10-07 11:31:45.296550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.524 [2024-10-07 11:31:45.296807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.524 [2024-10-07 11:31:45.296971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.524 [2024-10-07 11:31:45.297006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.524 [2024-10-07 11:31:45.297024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.297137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.306545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.306664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.306696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.306713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.306749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.306786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.306804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.306818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.306853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.317406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.317550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.317584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.317602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.317638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.317675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.317694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.317708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.317743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.328071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.328190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.328222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.328240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.328275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.328312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.328346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.328362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.328397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.338522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.338641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.338673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.338691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.338948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.339113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.339149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.339166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.339278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.348669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.348787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.348820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.348837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.348893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.348930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.348949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.348963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.348999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.359518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.359646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.359679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.359698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.359734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.359771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.359790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.359805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.359839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.370910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.371039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.371072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.371090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.371128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.371165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.371184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.371199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.371234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.382990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.383115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.383147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.383166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.383204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.383242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.383260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.383298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.383357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.394279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.394492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.394531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.394549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.394585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.394622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.394640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.394654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.394689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.405278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.405411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.405444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.405461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.405497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.405533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.405552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.405566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.405600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.415970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.416089] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.416120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.416137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.416173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.416209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.416227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.416241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.416275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.426456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.426575] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.426623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.426642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.426899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.427052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.427088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.427107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.427218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.436614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.436739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.436771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.436788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.436823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.436861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.436878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.436892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.436925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.447433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.447552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.447583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.447600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.447636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.447671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.447689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.447704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.447738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.458010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.458131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.458162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.458180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.458216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.458271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.458306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.458337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.458376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.468347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.468466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.468498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.468516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.468772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.468924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.468961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.468979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.469097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.478454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.478572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.478604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.478621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.478657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.478694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.478711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.478725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.479467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.488996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.489122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.489154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.489171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.489207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.489244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.489262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.489276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.489347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 8811.89 IOPS, 34.42 MiB/s [2024-10-07T11:31:53.048Z] [2024-10-07 11:31:45.501966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.502253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.502298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.502332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.503257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.503493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.503520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.503534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.503571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.525 [2024-10-07 11:31:45.513451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.513572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.513604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.525 [2024-10-07 11:31:45.513621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.525 [2024-10-07 11:31:45.513657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.525 [2024-10-07 11:31:45.513692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.525 [2024-10-07 11:31:45.513711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.525 [2024-10-07 11:31:45.513726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.525 [2024-10-07 11:31:45.513760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.525 [2024-10-07 11:31:45.523781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.525 [2024-10-07 11:31:45.523902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.525 [2024-10-07 11:31:45.523933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.523951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.524206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.524374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.524411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.524428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.524541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.533892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.534009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.534041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.534080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.534128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.534165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.534183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.534197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.534231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.544835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.544953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.544984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.545002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.545038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.545074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.545092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.545106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.545140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.555512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.555639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.555670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.555688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.555723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.555758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.555776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.555790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.555824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.565967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.566088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.566119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.566136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.566420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.566575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.566619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.566636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.566749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.576082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.576200] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.576231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.576249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.576285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.576338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.576360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.576374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.576410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.586999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.587120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.587152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.587169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.587205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.587243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.587261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.587275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.587309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.597738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.597859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.597891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.597909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.597945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.597981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.598000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.598014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.598048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.608210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.608346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.608379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.608397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.608434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.608470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.608488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.608502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.608757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.618525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.618645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.618676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.618694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.618730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.618776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.618794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.618808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.618842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.629413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.629533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.629565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.629583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.629620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.629657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.629675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.629689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.629723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.640159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.640278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.640310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.640344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.640407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.640445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.640463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.640477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.640530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.650642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.650762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.650793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.650811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.651078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.651230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.651266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.651284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.651420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.660790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.660911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.660942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.660960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.660996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.661033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.661051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.661065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.661099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.671643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.671762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.671794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.671812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.671848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.671884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.671903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.671935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.671972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.682259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.682403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.682437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.682454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.682490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.682526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.682544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.682558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.682594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.692692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.692812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.692843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.692861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.693119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.693272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.693297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.693312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.693441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.702837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.702957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.702989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.703007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.703043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.703080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.703098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.703113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.703147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.526 [2024-10-07 11:31:45.713626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.713745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.713794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.713813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.713850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.713887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.713905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.526 [2024-10-07 11:31:45.713919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.526 [2024-10-07 11:31:45.713953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.526 [2024-10-07 11:31:45.724278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.526 [2024-10-07 11:31:45.724410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.526 [2024-10-07 11:31:45.724443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.526 [2024-10-07 11:31:45.724461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.526 [2024-10-07 11:31:45.724497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.526 [2024-10-07 11:31:45.724534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.526 [2024-10-07 11:31:45.724552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.724566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.724600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.734851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.734969] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.735001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.735019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.735055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.735092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.735110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.735124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.735393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.745134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.745252] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.745283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.745301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.745352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.745410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.745430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.745444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.745478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.756032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.756153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.756185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.756203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.756239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.756275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.756293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.756307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.756359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.767333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.767510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.767558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.767589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.767649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.768712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.768771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.768803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.769064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.777833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.777977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.778012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.778030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.778068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.778105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.778124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.778138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.778198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.788816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.788940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.788972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.788990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.789026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.789062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.789080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.789094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.789129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.798922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.799040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.799072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.799089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.799125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.799160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.799178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.799193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.799227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.809028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.809146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.809178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.809195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.809231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.809266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.809285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.809299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.809859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.819464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.819585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.819618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.819655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.819931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.820087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.820123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.820140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.820254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.829588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.829710] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.829741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.829759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.829795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.829831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.829850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.829864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.829898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.840432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.840552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.840585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.840602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.840639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.840675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.840693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.840707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.840741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.851029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.851148] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.851180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.851198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.851233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.851270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.851306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.851346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.851384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.861469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.861586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.861619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.861647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.861902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.862066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.862101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.862118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.862229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.871631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.871749] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.871781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.871798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.871835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.871871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.871890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.871904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.871939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.882718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.882847] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.882879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.882897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.882934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.882970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.882988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.883003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.883037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.893352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.893472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.893504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.893522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.893557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.893594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.893612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.893626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.893670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.903920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.904042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.904082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.904100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.904370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.904525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.904560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.904578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.904701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.914125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.914232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.914263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.914280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.914349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.914389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.914408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.914422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.914456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.925104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.925225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.925257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.925279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.925352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.925393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.925411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.925425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.925459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.527 [2024-10-07 11:31:45.935861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.935982] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.527 [2024-10-07 11:31:45.936014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.527 [2024-10-07 11:31:45.936032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.527 [2024-10-07 11:31:45.936068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.936103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.527 [2024-10-07 11:31:45.936121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.527 [2024-10-07 11:31:45.936135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.527 [2024-10-07 11:31:45.936169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.527 [2024-10-07 11:31:45.946459] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe16a00 was disconnected and freed. reset controller. 00:20:57.527 [2024-10-07 11:31:45.946611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.527 [2024-10-07 11:31:45.946679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.527 [2024-10-07 11:31:45.946943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.947250] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.947293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.947313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.947470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.947521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.947542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.947557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.947588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:45.950159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.950280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.950331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.950350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.950403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:45.957069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.957142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.957222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.957250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.957267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.957345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.957374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:45.957390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.957409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.958140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.958178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.958194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.958209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.958417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:45.958444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.958459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.958472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.958563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:45.967700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.967750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.967843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.967873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.967890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.967938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.967961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:45.967976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.968008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.968031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.968057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.968091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.968107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.968123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.968137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.968150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.968181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:45.968198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:45.978458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.978513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.978608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.978639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:45.978657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.978705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.978728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.978743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.978775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.978798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.978825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.978843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.978857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.978873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.978887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.978901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.978930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:45.978947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:45.988721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.988770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.988862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.988893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.988910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.988957] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.988998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:45.989016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.989269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.989300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.989452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.989489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.989507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.989524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.989538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.989552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.989660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:45.989680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:45.998847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.998896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:45.998987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.999017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:45.999034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.999082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:45.999105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:45.999120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:45.999151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.999174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:45.999201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.999219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.999233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:45.999250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:45.999264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:45.999277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.000015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:46.000053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:46.009583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.009642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.009735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.009766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:46.009783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.009831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.009854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:46.009869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.009901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.009925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.009965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.009985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.009999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.010015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.010029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.010043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.010072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:46.010089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:46.020155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.020205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.020296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.020343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:46.020362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.020412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.020435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:46.020450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.020483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.020506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.020533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.020552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.020583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.020601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.020615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.020628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.020660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:46.020677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:46.030513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.030563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.030656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.030687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:46.030704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.030751] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.030775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:46.030790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.031051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.031083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.031217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.031243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.031258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.031275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.528 [2024-10-07 11:31:46.031289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.528 [2024-10-07 11:31:46.031302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.528 [2024-10-07 11:31:46.031424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.528 [2024-10-07 11:31:46.031447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.528 [2024-10-07 11:31:46.040635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.040709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.528 [2024-10-07 11:31:46.040787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.040814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.528 [2024-10-07 11:31:46.040831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.040895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.528 [2024-10-07 11:31:46.040922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.528 [2024-10-07 11:31:46.040955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.528 [2024-10-07 11:31:46.040975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.041724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.528 [2024-10-07 11:31:46.041767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.041785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.041799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.041970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.041995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.042010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.042023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.042132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.051285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.051345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.051440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.051472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.051489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.051536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.051559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.051575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.051607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.051630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.051657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.051674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.051688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.051705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.051719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.051732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.051761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.051778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.061740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.061809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.061904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.061935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.061952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.062000] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.062022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.062037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.062069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.062092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.062120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.062137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.062152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.062167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.062182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.062194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.062225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.062242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.071991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.072040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.072132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.072162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.072179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.072227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.072250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.072265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.072545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.072578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.072714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.072739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.072754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.072788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.072804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.072817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.072924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.072944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.082116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.082190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.082272] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.082331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.082352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.082421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.082449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.082465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.082483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.083212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.083252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.083270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.083284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.083470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.083497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.083511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.083525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.083615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.092681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.092732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.092825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.092855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.092872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.092920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.092943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.092958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.093009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.093033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.093060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.093078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.093093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.093109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.093123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.093136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.093166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.093183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.103176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.103227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.103333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.103365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.103382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.103430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.103453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.103468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.103500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.103523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.103550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.103568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.103582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.103598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.103612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.103625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.103654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.103671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.113451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.113501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.113834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.113878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.529 [2024-10-07 11:31:46.113898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.113950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.113972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.113988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.114125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.114154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.529 [2024-10-07 11:31:46.114257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.114278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.114307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.114343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.529 [2024-10-07 11:31:46.114360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.529 [2024-10-07 11:31:46.114373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.529 [2024-10-07 11:31:46.114414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.529 [2024-10-07 11:31:46.114433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.529 [2024-10-07 11:31:46.123590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.123664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.529 [2024-10-07 11:31:46.123743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.123771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.529 [2024-10-07 11:31:46.123788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.529 [2024-10-07 11:31:46.123852] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.529 [2024-10-07 11:31:46.123878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.537 [2024-10-07 11:31:46.123895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.537 [2024-10-07 11:31:46.123913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.124654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.124695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.124713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.124727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.124898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.124922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.124953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.124967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.125058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.134025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.134074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.134167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.134197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.134214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.134262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.134296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.134314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.134366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.134390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.134417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.134435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.134449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.134465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.134479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.134492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.134521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.134538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.144518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.144569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.144661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.144692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.144709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.144756] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.144778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.144794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.144825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.144865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.144895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.144913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.144927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.144943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.144957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.144970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.145001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.145018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.154779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.154829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.155153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.155196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.155216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.155268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.155291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.155306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.155460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.155490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.155594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.155615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.155630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.155647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.155661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.155675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.155712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.155731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.164898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.164973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.165051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.165078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.165112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.165896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.165938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.165957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.165976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.166165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.166195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.166210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.166223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.166349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.166373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.166387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.166401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.166432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.175388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.175438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.175529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.175560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.175577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.175625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.175648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.175663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.175695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.175719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.175746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.175763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.175777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.175793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.175808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.175837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.175871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.175889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.185867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.185919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.186015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.186047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.186063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.186113] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.186136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.186151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.186183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.186206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.186233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.186250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.186264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.186280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.186309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.186349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.186383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.186401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.196033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.196084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.196409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.196452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.196472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.196523] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.196547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.196562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.196689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.196717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.196839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.196860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.196874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.196891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.196905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.196918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.196956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.196975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.538 [2024-10-07 11:31:46.206154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.206229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.538 [2024-10-07 11:31:46.206333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.206365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.538 [2024-10-07 11:31:46.206382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.207149] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.538 [2024-10-07 11:31:46.207192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.538 [2024-10-07 11:31:46.207211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.538 [2024-10-07 11:31:46.207231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.207416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.538 [2024-10-07 11:31:46.207446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.207461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.538 [2024-10-07 11:31:46.207476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.538 [2024-10-07 11:31:46.207568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.538 [2024-10-07 11:31:46.207588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.538 [2024-10-07 11:31:46.207602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.207616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.207662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.216564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.216615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.216705] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.216735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.216752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.216819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.216843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.216859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.216892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.216915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.216942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.216959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.216973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.216989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.217003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.217016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.217046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.217063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.226996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.227048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.227141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.227171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.227188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.227235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.227258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.227274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.227305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.227345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.227374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.227392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.227406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.227422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.227436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.227449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.227479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.227509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.237124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.237198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.237278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.237307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.237339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.237631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.237672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.237692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.237711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.237878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.237914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.237931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.237944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.238052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.238072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.238086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.238099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.238135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.247216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.247345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.247377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.247394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.247441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.248185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.248238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.248256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.248270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.248452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.248519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.248562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.248580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.248674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.248710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.248728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.248742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.248771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.257573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.257688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.257720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.257737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.257768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.257800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.257817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.257831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.257862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.258266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.258386] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.258416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.258432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.258464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.258495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.258513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.258527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.259689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.268016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.268131] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.268162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.268179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.268210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.268259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.268279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.268293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.268340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.268395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.268478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.268506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.268523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.268553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.268584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.268602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.268616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.268645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.278167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.278294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.278341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.278361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.278614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.278791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.278827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.278845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.278955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.278980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.279064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.279093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.279110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.279141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.279173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.279191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.279205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.279234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.288260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.288392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.288424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.288441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.289163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.289400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.289453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.289472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.289569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.289595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.289675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.289705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.289722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.289753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.289785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.289803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.289817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.291091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.299138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.299332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.299394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.539 [2024-10-07 11:31:46.299428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.299488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.301100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.301162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.301195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.301546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.539 [2024-10-07 11:31:46.302574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.539 [2024-10-07 11:31:46.303702] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.539 [2024-10-07 11:31:46.303769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.539 [2024-10-07 11:31:46.303829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.539 [2024-10-07 11:31:46.304066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.539 [2024-10-07 11:31:46.304249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.539 [2024-10-07 11:31:46.304296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.539 [2024-10-07 11:31:46.304348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.539 [2024-10-07 11:31:46.305979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.539 [2024-10-07 11:31:46.309883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.310010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.310049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.310067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.310904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.311147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.311184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.311202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.311294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.312692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.312804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.312836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.312853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.312887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.312919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.312937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.312951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.312981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.320277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.320456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.320500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.320528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.320576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.320622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.320651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.320705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.320755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.323496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.323671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.323739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.323770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.324793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.325038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.325078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.325096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.326035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.330656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.330778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.330811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.330828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.330860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.330893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.330911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.330925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.330956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.333643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.333759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.333791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.333808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.333840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.333872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.333890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.333904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.333935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.340750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.340885] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.340917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.340935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.342101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.342385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.342424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.342442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.343168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.344394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.344504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.344535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.344552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.344584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.344616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.344635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.344649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.344679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.350856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.350971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.351002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.351027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.351059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.351091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.351109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.351123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.351154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.354486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.354599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.354629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.354646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.355827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.356091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.356128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.356145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.356888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.361217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.361356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.361389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.361406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.361439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.361471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.361489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.361503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.361534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.364574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.364686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.364717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.364734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.364765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.364797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.364816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.364830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.364859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.372040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.372156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.372187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.372204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.372251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.372289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.372308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.372368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.372407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.375060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.375183] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.375214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.375231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.375264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.375295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.375327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.375345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.375386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.382134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.382251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.382283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.382339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.383509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.383739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.383775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.383793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.384535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.385905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.386015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.386047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.386065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.386097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.386129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.386148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.386162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.386192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.392224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.392367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.392416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.392436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.392469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.392501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.392519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.392534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.392564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.395991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.396108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.396140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.396158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.396190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.396222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.396240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.396255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.397440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.540 [2024-10-07 11:31:46.402351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.402465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.402497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.540 [2024-10-07 11:31:46.402515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.402766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.402926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.402960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.402978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.403086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.540 [2024-10-07 11:31:46.406091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.540 [2024-10-07 11:31:46.406213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.540 [2024-10-07 11:31:46.406245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.540 [2024-10-07 11:31:46.406262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.540 [2024-10-07 11:31:46.406309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.540 [2024-10-07 11:31:46.406392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.540 [2024-10-07 11:31:46.406425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.540 [2024-10-07 11:31:46.406441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.540 [2024-10-07 11:31:46.406473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.412443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.412561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.412592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.412610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.412642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.412674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.412692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.412706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.412736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.416244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.416372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.416404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.416421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.416676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.416835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.416870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.416887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.416995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.423124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.423240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.423271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.423289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.423334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.423370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.423388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.423402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.423432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.426352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.426465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.426497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.426514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.426546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.426578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.426596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.426610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.426640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.433654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.433769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.433800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.433817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.433850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.433882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.433900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.433914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.433944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.436964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.437077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.437108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.437125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.437157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.437189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.437207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.437222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.437252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.443983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.444106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.444137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.444169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.444440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.444602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.444636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.444653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.444762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.447531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.447641] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.447672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.447688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.447720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.447752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.447770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.447784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.447814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.454074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.454188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.454219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.454236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.454277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.454343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.454364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.454379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.454409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.457813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.457926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.457957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.457974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.458251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.458431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.458484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.458504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.458613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.464646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.464761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.464793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.464810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.464842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.464874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.464892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.464906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.464937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.467902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.468014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.468045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.468062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.468094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.468125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.468143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.468158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.468187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.475153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.475266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.475297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.475328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.475364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.475396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.475414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.475428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.475459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.478440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.478569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.478601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.478618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.478650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.478682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.478706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.478722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.478752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.485395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.485512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.485543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.485560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.485811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.485970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.486004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.486021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.486128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 [2024-10-07 11:31:46.488922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.489033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.489064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.489081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.489113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.489145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.489163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.489177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.489207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.495486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.495600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.495632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.495649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.495700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.496444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.496482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.496500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.496694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.541 8851.30 IOPS, 34.58 MiB/s [2024-10-07T11:31:53.064Z] [2024-10-07 11:31:46.503055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.503257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.503301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.541 [2024-10-07 11:31:46.503336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.503372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.503405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.541 [2024-10-07 11:31:46.503424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.541 [2024-10-07 11:31:46.503438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.541 [2024-10-07 11:31:46.503469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.541 [2024-10-07 11:31:46.505915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.541 [2024-10-07 11:31:46.506036] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.541 [2024-10-07 11:31:46.506068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.541 [2024-10-07 11:31:46.506085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.541 [2024-10-07 11:31:46.506117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.541 [2024-10-07 11:31:46.506149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.506167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.506181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.506211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.513420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.513542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.513574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.513591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.513623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.513655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.513673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.513704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.513737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.516403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.516515] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.516546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.516563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.516595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.516627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.516645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.516659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.516689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.524122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.524237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.524268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.524285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.524331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.524368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.524386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.524400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.524430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.526610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.526721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.526753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.526770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.527021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.527184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.527219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.527236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.527360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.534218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.534349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.534397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.534417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.535584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.535814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.535849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.535866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.536604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.536765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.536868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.536908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.536926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.537666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.537863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.537899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.537916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.538009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.544308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.544432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.544463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.544480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.544512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.544544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.544562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.544576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.544606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.547127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.547240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.547272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.547289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.547335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.547397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.547416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.547431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.547462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.554683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.554808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.554840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.554858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.554890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.554922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.554940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.554955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.554985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.557677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.557786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.557817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.557834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.557866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.557899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.557917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.557930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.557964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.565486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.565743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.565788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.565809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.565909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.565954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.565975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.565989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.566036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.568234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.568357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.568389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.568407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.568665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.568824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.568859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.568877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.568985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.575577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.575690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.575722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.575739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.575771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.575804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.575822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.575836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.575866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.578356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.578466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.578497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.578514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.578546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.578578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.578596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.578610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.578639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.585685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.585797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.585828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.585872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.585905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.585938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.585956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.585970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.586001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.588999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.589112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.589143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.589160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.589191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.589224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.589242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.589259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.589289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.595956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.596070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.596102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.596119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.596391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.596541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.596576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.596594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.596701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.599538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.599648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.599680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.542 [2024-10-07 11:31:46.599698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.599730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.599765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.599799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.599814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.599846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.542 [2024-10-07 11:31:46.606047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.606167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.606199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.542 [2024-10-07 11:31:46.606217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.542 [2024-10-07 11:31:46.606249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.542 [2024-10-07 11:31:46.606281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.542 [2024-10-07 11:31:46.606314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.542 [2024-10-07 11:31:46.606348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.542 [2024-10-07 11:31:46.606381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.542 [2024-10-07 11:31:46.609928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.542 [2024-10-07 11:31:46.610038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.542 [2024-10-07 11:31:46.610069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.610086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.610364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.610515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.610551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.610568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.610677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.616771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.616887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.616919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.616936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.616968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.617000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.617017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.617031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.617062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.620015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.620145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.620176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.620193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.620225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.620257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.620275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.620289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.620336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.627262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.627392] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.627425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.627442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.627474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.627506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.627524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.627539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.627569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.630562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.630675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.630706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.630723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.630755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.630787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.630805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.630819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.630853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.637489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.637610] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.637641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.637658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.637928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.638087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.638122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.638140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.638246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.640985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.641094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.641125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.641142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.641173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.641205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.641224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.641238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.641268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.647581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.647694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.647724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.647742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.648482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.648673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.648700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.648715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.648806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.651160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.651271] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.651303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.651335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.651589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.651755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.651790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.651824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.651934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.657902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.658014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.658046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.658063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.658095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.658127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.658145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.658159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.658189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.661249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.661371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.661403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.661420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.662145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.662375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.662403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.662420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.662513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.668271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.668397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.668429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.668446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.668478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.668510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.668529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.668543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.668572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.671532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.671645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.671691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.671710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.671744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.671776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.671794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.671808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.671838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.678437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.678549] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.678580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.678597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.678854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.679019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.679057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.679075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.679182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.681914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.682022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.682052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.682071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.682102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.682135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.682153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.682167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.682197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.688527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.688637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.688668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.688686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.689423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.689631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.689658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.689673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.689765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.692085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.692441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.692485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.692505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.692661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.692777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.692799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.692813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.692852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.698780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.698892] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.698923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.698940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.698972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.699005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.699023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.699037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.699067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.702171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.702279] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.702339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.543 [2024-10-07 11:31:46.702359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.703083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.703273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.703300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.703330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.703444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.543 [2024-10-07 11:31:46.709145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.709265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.543 [2024-10-07 11:31:46.709297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.543 [2024-10-07 11:31:46.709327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.543 [2024-10-07 11:31:46.709363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.543 [2024-10-07 11:31:46.709396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.543 [2024-10-07 11:31:46.709414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.543 [2024-10-07 11:31:46.709428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.543 [2024-10-07 11:31:46.709459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.543 [2024-10-07 11:31:46.712417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.543 [2024-10-07 11:31:46.712528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.712560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.712577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.712609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.712642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.712660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.712674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.712705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.719347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.719462] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.719493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.719511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.719764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.719923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.719958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.719976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.720084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.722849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.722960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.722991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.723024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.723058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.723090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.723108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.723122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.723153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.729439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.729552] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.729584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.729601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.730353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.730545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.730572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.730587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.730679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.733006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.733118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.733148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.733166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.733431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.733579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.733614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.733631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.733739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.739759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.739880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.739911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.739929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.739960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.739993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.740028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.740044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.740075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.743090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.743209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.743240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.743257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.743288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.744027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.744065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.744083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.744272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.750184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.750311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.750355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.750373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.750406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.750439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.750457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.750471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.750501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.753493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.753603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.753634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.753651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.753683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.753715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.753733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.753747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.753777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.760610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.760741] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.760772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.760790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.761042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.761189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.761214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.761229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.761350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.764191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.764301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.764345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.764364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.764396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.764428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.764447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.764461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.764490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.770717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.770829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.770864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.770881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.770913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.770944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.770963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.770977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.771007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.774570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.774681] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.774712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.774731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.775000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.775149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.775186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.775204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.775312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.781387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.781502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.781534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.781551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.781583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.781615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.781633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.781648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.781678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.784663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.784774] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.784805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.784822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.784853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.784885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.784903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.784917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.784947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.791899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.792014] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.792045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.792063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.792095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.792127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.792145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.792177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.792211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.795233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.795360] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.795393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.795409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.795442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.795474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.795492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.795506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.795536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.544 [2024-10-07 11:31:46.802170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.802281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.802338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.544 [2024-10-07 11:31:46.802358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.802611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.802759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.802795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.544 [2024-10-07 11:31:46.802813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.544 [2024-10-07 11:31:46.802921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.544 [2024-10-07 11:31:46.805711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.544 [2024-10-07 11:31:46.805819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.544 [2024-10-07 11:31:46.805849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.544 [2024-10-07 11:31:46.805866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.544 [2024-10-07 11:31:46.805898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.544 [2024-10-07 11:31:46.805930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.544 [2024-10-07 11:31:46.805948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.805963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.805993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.812258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.812385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.812432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.812451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.812484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.812516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.812534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.812548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.812578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.816022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.816134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.816165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.816183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.816466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.816616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.816651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.816669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.816777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.822857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.822972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.823005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.823022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.823054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.823086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.823105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.823119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.823149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.826111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.826223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.826255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.826272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.826332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.826388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.826407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.826421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.827144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.833368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.833487] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.833519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.833537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.833569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.833601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.833619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.833633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.833663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.836598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.836712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.836743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.836761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.836792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.836825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.836845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.836860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.836890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.843509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.843624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.843655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.843673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.843924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.844058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.844092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.844110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.844239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.847103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.847217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.847249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.847267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.847298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.847346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.847366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.847381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.847411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.853599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.853711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.853743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.853760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.853792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.853824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.853842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.853856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.854614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.857228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.857351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.857383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.857400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.857652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.857786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.857820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.857837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.857945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.864049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.864164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.864195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.864231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.864264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.864297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.864329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.864346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.864377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.867333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.867445] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.867476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.867493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.867524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.867556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.867574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.867589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.868338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.874540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.874654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.874685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.874702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.874734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.874766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.874784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.874798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.874828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.877799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.877911] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.877942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.877959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.877990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.878022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.878057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.878073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.878105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.884729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.884841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.884873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.884890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.885141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.885275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.885308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.885341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.885450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.888290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.888410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.888441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.888458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.888489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.888521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.888538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.888553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.888583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.894819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.894932] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.894963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.894981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.895012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.895043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.895061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.895075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.895813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.898458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.898572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.898604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.898620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.898872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.899018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.899050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.899067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.899175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.545 [2024-10-07 11:31:46.905279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.905405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.905436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.545 [2024-10-07 11:31:46.905454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.905485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.905516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.545 [2024-10-07 11:31:46.905534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.545 [2024-10-07 11:31:46.905548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.545 [2024-10-07 11:31:46.905579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.545 [2024-10-07 11:31:46.908547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.545 [2024-10-07 11:31:46.908656] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.545 [2024-10-07 11:31:46.908687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.545 [2024-10-07 11:31:46.908704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.545 [2024-10-07 11:31:46.908736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.545 [2024-10-07 11:31:46.908767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.908785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.908799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.909536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.915759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.915881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.915913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.915931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.915982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.916014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.916033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.916047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.916077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.918990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.919105] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.919136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.919154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.919186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.919218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.919236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.919250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.919280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.925947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.926066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.926099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.926116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.926411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.926564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.926600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.926618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.926727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.929492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.929602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.929633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.929650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.929682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.929713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.929731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.929767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.929800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.936038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.936153] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.936184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.936201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.936233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.936265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.936283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.936297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.937039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.939726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.939837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.939868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.939885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.940141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.940289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.940336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.940356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.940465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.946488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.946602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.946633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.946651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.946682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.946714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.946731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.946746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.946776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.949815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.949925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.949977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.949995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.950028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.950789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.950828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.950845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.951017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.956934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.957047] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.957078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.957095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.957126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.957158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.957176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.957190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.957220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.960207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.960331] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.960362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.960380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.960413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.960444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.960462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.960476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.960507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.967101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.967214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.967245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.967262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.967528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.967696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.967734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.967752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.967859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.970632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.970745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.970775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.970792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.970824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.970856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.970873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.970888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.970917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.977190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.977302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.977349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.977367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.977400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.977432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.977450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.977464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.978186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.980834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.980946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.980977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.980994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.981260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.981437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.981472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.981490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.981616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.987584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.987699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.987731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.987748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.987780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.987812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.987830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.987845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.987875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:46.990917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.991028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.991059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:46.991076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.991107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.991860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.991899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.991918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.992087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:46.997988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:46.998099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:46.998129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:46.998147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:46.998178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:46.998210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:46.998228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:46.998242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:46.998272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.546 [2024-10-07 11:31:47.001231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:47.001356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:47.001388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.546 [2024-10-07 11:31:47.001423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.546 [2024-10-07 11:31:47.001457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.546 [2024-10-07 11:31:47.001489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.546 [2024-10-07 11:31:47.001508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.546 [2024-10-07 11:31:47.001522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.546 [2024-10-07 11:31:47.001552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.546 [2024-10-07 11:31:47.008179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.546 [2024-10-07 11:31:47.008294] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.546 [2024-10-07 11:31:47.008339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.546 [2024-10-07 11:31:47.008359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.008611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.008770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.008806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.008823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.008931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.011702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.011812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.011843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.011860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.011892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.011923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.011942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.011955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.011986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.018268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.018413] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.018445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.018463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.018495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.019220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.019274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.019293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.019499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.021887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.021998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.022029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.022046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.022310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.022491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.022523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.022540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.022647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.028650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.028765] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.028796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.028815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.028847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.028879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.028897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.028911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.028941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.031971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.032081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.032112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.032129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.032160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.032192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.032209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.032223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.032960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.039083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.039197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.039228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.039246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.039278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.039310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.039345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.039361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.039392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.042440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.042556] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.042587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.042604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.042636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.042669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.042687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.042702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.042733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.049367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.049468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.049499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.049516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.049547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.049580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.049598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.049612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.049861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.053091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.053202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.053233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.053250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.053304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.053355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.053374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.053389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.053420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.059515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.059628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.059660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.059678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.059709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.059741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.059760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.059775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.059811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.063388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.063498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.063529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.063547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.063797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.063952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.063978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.063993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.064100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.070305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.070440] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.070472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.070489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.070521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.070553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.070572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.070604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.070637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.073481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.073589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.073620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.073637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.073668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.073700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.073718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.073733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.073763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.080937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.081053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.081085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.081102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.081134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.081167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.081185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.081199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.081228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.084264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.084391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.084423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.084440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.084471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.084503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.084521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.084535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.084566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.091195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.091302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.091357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.091384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.091638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.091794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.091819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.091834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.091940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.094907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.095020] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.095052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.095069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.095101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.095133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.095151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.095164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.095194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.101345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.101460] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.101491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.101508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.101540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.101572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.101590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.101605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.101634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.547 [2024-10-07 11:31:47.105170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.105285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.105330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.547 [2024-10-07 11:31:47.105349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.547 [2024-10-07 11:31:47.105602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.547 [2024-10-07 11:31:47.105757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.547 [2024-10-07 11:31:47.105781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.547 [2024-10-07 11:31:47.105795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.547 [2024-10-07 11:31:47.105905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.547 [2024-10-07 11:31:47.112120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.547 [2024-10-07 11:31:47.112233] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.547 [2024-10-07 11:31:47.112265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.547 [2024-10-07 11:31:47.112282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.112330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.112367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.112386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.112400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.112438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.115278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.115401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.115433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.115450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.115482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.115514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.115532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.115546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.115576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.122696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.122820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.122861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.122879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.122911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.122943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.122962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.122976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.123024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.126096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.126207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.126238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.126255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.126300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.126350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.126370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.126384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.126415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.133005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.133120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.133151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.133169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.133440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.133587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.133620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.133636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.133743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.136633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.136743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.136774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.136791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.136823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.136855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.136873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.136887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.136917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.143098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.143211] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.143242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.143277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.143311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.143362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.143381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.143396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.143427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.146928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.147041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.147072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.147090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.147374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.147541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.147575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.147592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.147700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.153816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.153927] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.153958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.153976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.154007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.154039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.154057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.154071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.154101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.157019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.157128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.157159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.157176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.157207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.157239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.157273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.157287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.157334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.164394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.164507] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.164539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.164557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.164588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.164620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.164638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.164652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.164682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.167704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.167816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.167848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.167865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.167896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.167928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.167946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.167959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.167989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.174750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.174866] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.174898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.174915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.175166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.175312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.175368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.175386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.175494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.178341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.178452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.178483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.178500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.178532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.178564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.178582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.178596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.178626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.184840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.184952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.184983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.185000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.185031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.185064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.185081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.185095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.185125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.188588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.188700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.188731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.188748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.188998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.189157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.189193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.189210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.189330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.195401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.195512] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.195544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.195562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.195612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.195645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.195664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.195678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.195708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.548 [2024-10-07 11:31:47.198678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.198789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.198820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.548 [2024-10-07 11:31:47.198837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.198869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.198901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.198919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.198933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.198963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.548 [2024-10-07 11:31:47.205994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.548 [2024-10-07 11:31:47.206108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.548 [2024-10-07 11:31:47.206139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.548 [2024-10-07 11:31:47.206156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.548 [2024-10-07 11:31:47.206188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.548 [2024-10-07 11:31:47.206220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.548 [2024-10-07 11:31:47.206238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.548 [2024-10-07 11:31:47.206253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.548 [2024-10-07 11:31:47.206283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.209475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.209585] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.209616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.209633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.209665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.209696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.209714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.209747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.209780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.216480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.216593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.216624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.216642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.216893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.217041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.217076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.217093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.217201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.220097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.220208] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.220238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.220255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.220287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.220334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.220355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.220369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.220401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.226570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.226690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.226721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.226739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.226771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.226802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.226821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.226835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.226864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.230408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.230540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.230572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.230590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.230852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.230999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.231035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.231053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.231160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.237287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.237411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.237443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.237460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.237491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.237523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.237541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.237556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.237586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.240515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.240626] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.240656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.240674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.240705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.240737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.240755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.240769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.240798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.247767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.247882] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.247913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.247930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.247962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.248013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.248033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.248047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.248077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.251068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.251180] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.251212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.251229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.251261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.251293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.251310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.251343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.251377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.258031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.258145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.258177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.258195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.258499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.258651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.258687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.258706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.258813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.261579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.261688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.261719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.261736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.261767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.261799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.261817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.261831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.261877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.268123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.268237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.268268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.268286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.268333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.268369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.268388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.268401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.268431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.271869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.271983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.272013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.272031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.272282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.272443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.272477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.272495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.272602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.278711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.278825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.278857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.278875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.278907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.278938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.278957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.278971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.279000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.281962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.282080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.282112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.282145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.282179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.282212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.282230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.282243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.282999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.289188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.289302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.289347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.289365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.289397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.289430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.289448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.289462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.289492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.292479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.292593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.292632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.292649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.292681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.292713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.292731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.292745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.292777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.299362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.299476] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.299508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.299525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.299776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.299922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.299970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.299988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.300099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.549 [2024-10-07 11:31:47.302948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.303061] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.303092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.549 [2024-10-07 11:31:47.303109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.303141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.303173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.303190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.303205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.549 [2024-10-07 11:31:47.303236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.549 [2024-10-07 11:31:47.309452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.549 [2024-10-07 11:31:47.309564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.549 [2024-10-07 11:31:47.309596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.549 [2024-10-07 11:31:47.309613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.549 [2024-10-07 11:31:47.309645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.549 [2024-10-07 11:31:47.309677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.549 [2024-10-07 11:31:47.309695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.549 [2024-10-07 11:31:47.309710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.310456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.313083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.313192] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.313223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.313240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.313517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.313681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.313716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.313733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.313841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.319939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.320064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.320095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.320112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.320144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.320177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.320195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.320209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.320239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.323172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.323283] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.323314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.323349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.323387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.323420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.323438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.323453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.324184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.330434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.330553] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.330584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.330601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.330633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.330665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.330683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.330697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.330727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.333677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.333787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.333819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.333836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.333886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.333920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.333937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.333951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.333982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.340570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.340690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.340721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.340738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.340989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.341134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.341167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.341184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.341291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.344118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.344229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.344260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.344277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.344308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.344358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.344377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.344391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.344422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.350669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.350782] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.350813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.350831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.350862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.350895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.350913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.350944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.351690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.354274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.354421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.354453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.354471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.354723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.354867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.354912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.354928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.355036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.361074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.361187] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.361224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.361241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.361273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.361305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.361337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.361352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.361383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.364396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.364511] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.364542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.364559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.364590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.364623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.364641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.364655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.365393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.371537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.371671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.371703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.371720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.371752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.371784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.371802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.371816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.371846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.374788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.374900] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.374930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.374947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.374979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.375011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.375029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.375043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.375082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.381703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.381816] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.381847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.381869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.382120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.382279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.382339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.382358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.382467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.385197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.385306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.385363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.385381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.385413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.385462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.385481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.385496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.385526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.391788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.391901] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.391932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.391949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.391981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.392013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.392031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.392045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.392785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.395452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.395564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.395594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.395611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.395887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.396072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.396107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.396124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.396231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.402210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.402357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.402389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.402407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.402439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.402472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.402490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.402504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.402553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.550 [2024-10-07 11:31:47.405542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.405652] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.405692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.550 [2024-10-07 11:31:47.405709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.405741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.550 [2024-10-07 11:31:47.406492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.550 [2024-10-07 11:31:47.406531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.550 [2024-10-07 11:31:47.406548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.550 [2024-10-07 11:31:47.406719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.550 [2024-10-07 11:31:47.412676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.550 [2024-10-07 11:31:47.412789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.550 [2024-10-07 11:31:47.412820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.550 [2024-10-07 11:31:47.412837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.550 [2024-10-07 11:31:47.412880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.412912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.412930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.412944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.412974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.415968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.416081] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.416113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.416130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.416162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.416195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.416212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.416226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.416257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.422960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.423072] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.423104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.423138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.423413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.423575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.423610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.423628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.423735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.426523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.426635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.426666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.426683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.426715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.426748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.426766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.426780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.426810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.433048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.433160] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.433191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.433208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.433239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.433272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.433291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.433305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.433350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.436748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.436864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.436896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.436913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.437165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.437341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.437401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.437419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.437529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.443565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.443680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.443712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.443729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.443761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.443793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.443811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.443826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.443855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.446836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.446947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.446977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.446994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.447026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.447059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.447076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.447091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.447844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.454010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.454126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.454157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.454174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.454206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.454238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.454257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.454270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.454313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.457327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.457439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.457470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.457487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.457519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.457558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.457576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.457589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.457620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.464306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.464439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.464469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.464487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.464740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.464900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.464935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.464952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.465059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.467847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.467958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.467989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.468006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.468042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.468074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.468092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.468106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.468136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.474418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.474528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.474560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.474577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.474628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.474661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.474678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.474693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.475431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.478070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.478182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.478213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.478229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.478520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.478682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.478743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.478759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.478867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.484872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.484985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.485015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.485032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.485064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.485096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.485113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.485128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.485157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.488163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.488273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.488304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.488338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.488373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.488405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.488423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.488454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.489178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.495303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.495432] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.495464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.495481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.495514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.495546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.495564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.495578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.495608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 8890.45 IOPS, 34.73 MiB/s [2024-10-07T11:31:53.074Z] [2024-10-07 11:31:47.501305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.502339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.502384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.502405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.503204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.503406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.503433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.503448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.504776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.551 [2024-10-07 11:31:47.505611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.505942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.505985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.551 [2024-10-07 11:31:47.506004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.506132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.506244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.506271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.506297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.551 [2024-10-07 11:31:47.506353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.551 [2024-10-07 11:31:47.512283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.551 [2024-10-07 11:31:47.512434] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.551 [2024-10-07 11:31:47.512466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.551 [2024-10-07 11:31:47.512484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.551 [2024-10-07 11:31:47.512516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.551 [2024-10-07 11:31:47.512548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.551 [2024-10-07 11:31:47.512566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.551 [2024-10-07 11:31:47.512580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.512611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.515699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.515812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.515842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.515859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.516598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.516789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.516816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.516832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.516923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.522748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.522859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.522889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.522906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.522938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.522970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.522988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.523002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.523032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.525980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.526100] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.526131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.526148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.526197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.526231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.526249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.526263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.526306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.532846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.532960] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.532992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.533011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.533262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.533413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.533448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.533465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.533573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.536417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.536529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.536561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.536579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.536611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.536643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.536661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.536675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.536705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.542935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.543051] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.543082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.543100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.543131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.543885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.543922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.543957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.544149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.546529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.546640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.546671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.546689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.546951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.547084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.547107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.547121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.547226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.553447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.553562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.553593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.553610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.553642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.553674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.553691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.553706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.553736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.556617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.556725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.556756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.556773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.556804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.556836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.556855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.556869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.556898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.563984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.564097] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.564175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.564195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.564229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.564262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.564280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.564293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.564338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.567604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.567775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.567835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.567867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.567920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.567977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.568006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.568030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.568075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.574381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.574510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.574551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.574571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.574827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.574974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.575006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.575023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.575132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.578361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.578516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.578574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.578605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.578664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.578738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.578768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.578791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.578835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.585101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.586184] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.586248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.586280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.586569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.586763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.586819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.586846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.588430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.589695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.589835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.589879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.589899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.589933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.589966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.589985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.590003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.590034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.595566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.595684] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.595727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.595747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.595780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.595813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.595832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.595847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.595878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.600151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.600267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.600306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.600339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.600373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.600407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.600426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.600439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.600469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.606186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.606332] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.606366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.606384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.606418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.606451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.606469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.606483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.606514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.610245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.610382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.610415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.552 [2024-10-07 11:31:47.610442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.610474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.610507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.610525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.610539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.610570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.552 [2024-10-07 11:31:47.616661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.616777] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.616809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.552 [2024-10-07 11:31:47.616843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.552 [2024-10-07 11:31:47.617115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.552 [2024-10-07 11:31:47.617284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.552 [2024-10-07 11:31:47.617331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.552 [2024-10-07 11:31:47.617351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.552 [2024-10-07 11:31:47.617460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.552 [2024-10-07 11:31:47.620366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.552 [2024-10-07 11:31:47.620480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.552 [2024-10-07 11:31:47.620513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.620530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.620562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.620595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.620613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.620628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.620657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.626878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.626994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.627025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.627042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.627074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.627106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.627124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.627138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.627168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.630833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.630955] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.630986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.631004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.631255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.631436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.631491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.631511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.631620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.637710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.637825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.637856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.637873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.637905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.637937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.637955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.637969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.638000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.640922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.641034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.641065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.641091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.641122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.641154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.641172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.641186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.641216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.648373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.648493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.648526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.648543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.648575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.648607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.648625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.648640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.648671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.651681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.651814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.651846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.651864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.651898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.651931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.651950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.651964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.651995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.658649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.658764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.658796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.658813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.659071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.659231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.659293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.659310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.659434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.662164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.662274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.662332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.662353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.662387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.662420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.662438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.662452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.662482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.668744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.668859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.668890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.668907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.668959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.668992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.669010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.669024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.669055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.672511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.672625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.672656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.672674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.672943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.673104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.673144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.673161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.673268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.679286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.679425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.679458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.679475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.679507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.679540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.679557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.679572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.679602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.682602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.682714] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.682745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.682762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.682793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.682825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.682843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.682875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.683626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.689792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.689905] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.689937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.689954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.689986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.690017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.690035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.690050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.690080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.693056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.693168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.693201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.693219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.693250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.693282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.693301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.693329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.693364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.700068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.700178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.700210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.700227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.700517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.700677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.700712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.700729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.700836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.703611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.703722] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.703776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.703795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.703828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.703860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.703879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.703893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.703923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.710165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.710278] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.710335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.710355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.710388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.711118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.711155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.711173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.711374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.553 [2024-10-07 11:31:47.713763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.713872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.713903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.553 [2024-10-07 11:31:47.713927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.714178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.714349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.714384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.714401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.714516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.553 [2024-10-07 11:31:47.720638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.553 [2024-10-07 11:31:47.720759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.553 [2024-10-07 11:31:47.720790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.553 [2024-10-07 11:31:47.720807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.553 [2024-10-07 11:31:47.720839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.553 [2024-10-07 11:31:47.720891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.553 [2024-10-07 11:31:47.720910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.553 [2024-10-07 11:31:47.720924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.553 [2024-10-07 11:31:47.720954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.723851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.723963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.723993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.724010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.724041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.724074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.724092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.724106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.724136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.731180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.731306] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.731351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.731370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.731402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.731434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.731452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.731466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.731496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.734475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.734587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.734628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.734645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.734677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.734709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.734727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.734742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.734772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.741430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.741545] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.741576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.741594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.741845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.741991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.742022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.742039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.742146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.744996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.745106] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.745136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.745154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.745186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.745219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.745237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.745251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.745281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.751560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.751673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.751705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.751722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.751754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.751786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.751804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.751818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.751848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.755479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.755590] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.755621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.755655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.755908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.756055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.756085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.756102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.756208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.762408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.762524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.762556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.762573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.762605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.762637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.762655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.762670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.762701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.765567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.765677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.765707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.765725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.765756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.765788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.765806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.765821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.765856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.772941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.773054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.773086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.773103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.773135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.773167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.773202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.773217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.773249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.776218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.776345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.776378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.776395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.776430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.776462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.776480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.776494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.776524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.783167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.783280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.783311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.783344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.783596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.783741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.783791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.783808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.783915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.786758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.786868] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.786915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.786933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.786965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.786997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.787015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.787029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.787059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.793256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.793401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.793463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.793482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.793514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.793547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.793564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.793578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.793609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.796999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.797112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.797150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.797168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.797437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.797582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.797613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.797630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.797737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.803920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.804033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.804064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.804081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.804113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.804145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.804163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.804177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.804207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.807088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.807201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.807237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.807255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.807306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.807355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.807374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.807389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.807419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.814447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.814561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.814599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.814618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.814664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.814698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.814716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.814730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.814760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.817692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.817801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.817831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.817848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.817880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.817919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.817937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.817951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.817981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.554 [2024-10-07 11:31:47.824757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.824869] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.824900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.554 [2024-10-07 11:31:47.824917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.825169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.825345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.825388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.825420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.554 [2024-10-07 11:31:47.825532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.554 [2024-10-07 11:31:47.828268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.554 [2024-10-07 11:31:47.828394] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.554 [2024-10-07 11:31:47.828426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.554 [2024-10-07 11:31:47.828444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.554 [2024-10-07 11:31:47.828476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.554 [2024-10-07 11:31:47.828508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.554 [2024-10-07 11:31:47.828526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.554 [2024-10-07 11:31:47.828540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.828571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.834850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.834963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.834994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.835012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.835043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.835075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.835093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.835107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.835138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.838563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.838687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.838720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.838737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.838991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.839152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.839187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.839204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.839311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.845331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.845448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.845496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.845516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.845548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.845580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.845598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.845613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.845644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.848656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.848771] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.848802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.848832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.848863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.849606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.849665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.849683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.849857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.855815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.855934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.855966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.855984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.856016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.856049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.856067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.856080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.856111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.859265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.859397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.859430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.859448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.859487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.859539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.859559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.859573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.859604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.866034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.866151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.866183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.866202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.866485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.866623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.866657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.866675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.866782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.869613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.869726] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.869757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.869774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.869807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.869839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.869857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.869871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.869901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.876126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.876242] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.876274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.876291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.876339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.876375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.876393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.876408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.877130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.879745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.879859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.879890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.879907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.880161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.880371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.880408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.880425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.880533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.886574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.886688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.886719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.886736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.886767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.886800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.886824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.886838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.886869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.889836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.889944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.889974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.889991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.890022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.890054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.890072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.890086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.890837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.897000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.897112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.897143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.897181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.897215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.897247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.897265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.897281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.897312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.900256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.900384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.900416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.900434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.900466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.900498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.900516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.900530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.900560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.907214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.907341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.907373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.907391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.907648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.907807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.907839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.907856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.907964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.910743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.910854] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.910885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.910902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.910934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.910967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.911002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.911018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.911049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.917304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.917441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.917473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.917490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.917527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.917559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.917578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.917592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.918350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.921000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.921111] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.921142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.921160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.921446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.921606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.921642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.921659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.921767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.555 [2024-10-07 11:31:47.927948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.928062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.928094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.555 [2024-10-07 11:31:47.928111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.555 [2024-10-07 11:31:47.928143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.555 [2024-10-07 11:31:47.928175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.555 [2024-10-07 11:31:47.928193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.555 [2024-10-07 11:31:47.928207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.555 [2024-10-07 11:31:47.928238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.555 [2024-10-07 11:31:47.931089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.555 [2024-10-07 11:31:47.931231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.555 [2024-10-07 11:31:47.931262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.555 [2024-10-07 11:31:47.931279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.931311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.931361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.931380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.931394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.931424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.938574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.938688] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.938719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.938736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.938768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.938800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.938818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.938832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.938862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.941892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.942011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.942042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.942059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.942091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.942123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.942141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.942156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.942185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.948889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.949001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.949033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.949050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.949352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.949516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.949550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.949569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.949678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.952409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.952520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.952551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.952567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.952599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.952631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.952649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.952663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.952692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.958978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.959091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.959122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.959139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.959171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.959203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.959221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.959235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.959266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.962824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.962934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.962965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.962981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.963233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.963410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.963446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.963481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.963590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.969677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.969795] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.969826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.969843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.969874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.969907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.969925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.969939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.969969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.972915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.973025] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.973056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.973073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.973105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.973137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.973155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.973170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.973199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.980276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.980404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.980445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.980462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.980494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.980526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.980544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.980558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.980589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.983611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.983723] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.983772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.983791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.983823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.983857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.983875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.983889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.983919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:47.990631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.990744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.990782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:47.990800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.991051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.991209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.991243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.991261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.991382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:47.994113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:47.994222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:47.994257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:47.994274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:47.994331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:47.994368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:47.994386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:47.994401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:47.994431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:48.000719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.000831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.000863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:48.000880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.000912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.000961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.000980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.000995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.001025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:48.004512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.004624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.004655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:48.004672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.004938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.005099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.005133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.005151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.005258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:48.011369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.011483] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.011516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:48.011533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.011566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.011598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.011616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.011630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.011660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:48.014601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.014712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.014744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:48.014762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.014793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.014825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.014843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.014857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.014905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:48.021883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.021998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.022031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:48.022049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.022080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.022113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.022131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.022145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.022175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:48.025151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.025262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.025293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:48.025311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.025362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.025394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.025413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.025433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.025463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:48.032147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.032261] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.032292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.556 [2024-10-07 11:31:48.032309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.032580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.032740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.032774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.032792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.032899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.556 [2024-10-07 11:31:48.035690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.035801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.035831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.556 [2024-10-07 11:31:48.035866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.556 [2024-10-07 11:31:48.035900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.556 [2024-10-07 11:31:48.035932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.556 [2024-10-07 11:31:48.035950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.556 [2024-10-07 11:31:48.035964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.556 [2024-10-07 11:31:48.035994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.556 [2024-10-07 11:31:48.042238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.556 [2024-10-07 11:31:48.042372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.556 [2024-10-07 11:31:48.042404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.042422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.042454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.042486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.042504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.042519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.042550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.045921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.046032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.046063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.046080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.046362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.046523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.046559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.046576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.046684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.052802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.052914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.052955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.052972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.053004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.053036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.053071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.053087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.053119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.056007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.056118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.056149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.056166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.056198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.056230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.056248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.056263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.056292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.063340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.063453] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.063485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.063502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.063534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.063566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.063583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.063598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.063628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.066625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.066737] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.066768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.066786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.066817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.066848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.066866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.066880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.066910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.073619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.073758] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.073799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.073816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.074067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.074227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.074261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.074278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.074412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.077115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.077225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.077255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.077272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.077304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.077358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.077378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.077392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.077422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.083724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.083836] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.083867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.083884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.083916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.083948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.083965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.083980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.084729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.087355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.087465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.087497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.087514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.087785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.087958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.087994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.088011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.088119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.094176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.094302] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.094348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.094367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.094401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.094434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.094451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.094465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.094496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.097446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.097555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.097585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.097602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.097634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.097665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.097683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.097698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.098444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.104603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.104718] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.104749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.104767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.104798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.104830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.104848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.104879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.104913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.107898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.108010] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.108041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.108058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.108090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.108123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.108141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.108155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.108187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.114868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.114983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.115014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.115032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.115297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.115485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.115521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.115538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.115646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.118382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.118492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.118523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.118540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.118571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.118603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.118621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.118636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.118665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.124961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.125073] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.125121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.125140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.125172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.125914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.125951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.125970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.126141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.128588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.128700] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.128731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.128749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.129000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.129164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.129199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.129216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.129337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.557 [2024-10-07 11:31:48.135360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.135473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.135505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.557 [2024-10-07 11:31:48.135522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.135553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.135585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.135603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.135617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.135647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.557 [2024-10-07 11:31:48.138678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.557 [2024-10-07 11:31:48.138790] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.557 [2024-10-07 11:31:48.138821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.557 [2024-10-07 11:31:48.138838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.557 [2024-10-07 11:31:48.138870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.557 [2024-10-07 11:31:48.139631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.557 [2024-10-07 11:31:48.139670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.557 [2024-10-07 11:31:48.139688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.557 [2024-10-07 11:31:48.139858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.145765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.145878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.145910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.145928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.145960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.145992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.146010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.146024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.146055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.149024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.149134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.149165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.149183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.149214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.149246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.149264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.149278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.149308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.155953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.156067] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.156098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.156116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.156394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.156541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.156573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.156590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.156719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.159543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.159655] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.159687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.159704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.159735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.159767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.159785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.159799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.159829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.166042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.166158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.166190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.166207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.166238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.166270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.166300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.166329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.167057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.169683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.169792] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.169822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.169839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.170090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.170253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.170296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.170328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.170441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.176516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.176629] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.176661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.176695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.176729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.176762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.176780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.176795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.176825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.179768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.179880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.179910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.179927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.179959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.179991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.180009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.180023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.180764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.187004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.187117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.187148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.187165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.187198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.187230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.187248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.187262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.187291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.190239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.190372] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.190404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.190422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.190454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.190486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.190522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.190537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.190569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.197153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.197267] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.197298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.197329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.197585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.197751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.197777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.197792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.197899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.200744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.200856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.200887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.200904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.200935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.200968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.200986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.201001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.201030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.207244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.207369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.207401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.207419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.207451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.207483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.207501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.207515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.207545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.210904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.211034] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.211066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.211084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.211351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.211487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.211521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.211538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.211645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.217764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.217889] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.217921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.217938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.217970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.218002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.218020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.218035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.218065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.221011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.221121] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.221155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.221172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.221203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.221235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.221253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.221268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.221297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.228277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.228404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.228436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.228454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.228504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.228538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.228555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.228570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.228601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.231554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.231668] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.231698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.231716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.231748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.231783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.231801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.231815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.231845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.238545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.238659] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.238690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.238708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.238959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.239105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.239138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.239154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.239262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.558 [2024-10-07 11:31:48.242128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.242238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.242269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.558 [2024-10-07 11:31:48.242297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.242346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.242382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.242401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.242432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.558 [2024-10-07 11:31:48.242464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.558 [2024-10-07 11:31:48.248632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.558 [2024-10-07 11:31:48.248744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.558 [2024-10-07 11:31:48.248776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.558 [2024-10-07 11:31:48.248793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.558 [2024-10-07 11:31:48.248825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.558 [2024-10-07 11:31:48.248857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.558 [2024-10-07 11:31:48.248875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.558 [2024-10-07 11:31:48.248890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.248920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.252561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.252672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.252703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.252721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.252972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.253158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.253193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.253216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.253337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.259485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.259604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.259636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.259653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.259685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.259717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.259735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.259750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.259780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.262651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.262759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.262806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.262825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.262858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.262890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.262908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.262922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.262953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.270027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.270145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.270177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.270195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.270228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.270260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.270278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.270305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.270355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.273341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.273452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.273483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.273501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.273533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.273565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.273583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.273598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.273628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.280451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.280562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.280594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.280611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.280862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.281039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.281074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.281092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.281200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.283958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.284069] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.284100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.284117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.284148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.284180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.284198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.284212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.284243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.290538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.290651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.290682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.290699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.290731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.290763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.290781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.290795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.290825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.294337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.294448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.294480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.294497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.294756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.294915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.294949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.294966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.295091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.301184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.301298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.301344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.301363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.301396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.301428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.301446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.301460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.301490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.304424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.304534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.304565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.304582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.304614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.304646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.304663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.304678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.304707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.311757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.311873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.311905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.311922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.311954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.311986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.312007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.312022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.312052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.315086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.315199] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.315230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.315264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.315298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.315347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.315368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.315382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.315412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.322073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.322190] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.322221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.322239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.322535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.322706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.322741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.322759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.322867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.325583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.325694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.325725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.325743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.325775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.325806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.325825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.325839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.325869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.332165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.332275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.332306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.332339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.332373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.332405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.332440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.332456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.332488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.335887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.336001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.336033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.336050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.336330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.336492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.336527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.336544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.336651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.342701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.342817] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.342848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.342865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.342896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.342928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.342946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.342960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.342991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.345977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.346085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.346115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.346132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.346164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.346196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.346214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.346228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.346259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.559 [2024-10-07 11:31:48.353201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.353344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.353378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.559 [2024-10-07 11:31:48.353395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.353428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.559 [2024-10-07 11:31:48.353460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.559 [2024-10-07 11:31:48.353478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.559 [2024-10-07 11:31:48.353492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.559 [2024-10-07 11:31:48.353524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.559 [2024-10-07 11:31:48.356469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.559 [2024-10-07 11:31:48.356581] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.559 [2024-10-07 11:31:48.356612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.559 [2024-10-07 11:31:48.356629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.559 [2024-10-07 11:31:48.356661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.356693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.356711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.356725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.356755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.560 [2024-10-07 11:31:48.363504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.363617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.363648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.560 [2024-10-07 11:31:48.363666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.363917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.364078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.364114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.364131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.364239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.560 [2024-10-07 11:31:48.367016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.367126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.367157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.560 [2024-10-07 11:31:48.367174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.367221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.367254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.367272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.367286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.367332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.560 [2024-10-07 11:31:48.373593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.373699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.373730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.560 [2024-10-07 11:31:48.373747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.373778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.373811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.373829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.373842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.373873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.560 [2024-10-07 11:31:48.377289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.377412] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.377444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.560 [2024-10-07 11:31:48.377461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.377712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.377846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.377879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.377897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.378004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.560 [2024-10-07 11:31:48.384166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.384280] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.384312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.560 [2024-10-07 11:31:48.384345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.384377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.384410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.384428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.384459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.384492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.560 [2024-10-07 11:31:48.387387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.387500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.387531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.560 [2024-10-07 11:31:48.387548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.387580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.387612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.387630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.387644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.387674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.560 [2024-10-07 11:31:48.394820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.394934] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.394965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.560 [2024-10-07 11:31:48.394983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.560 [2024-10-07 11:31:48.395015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.560 [2024-10-07 11:31:48.395047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.560 [2024-10-07 11:31:48.395064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.560 [2024-10-07 11:31:48.395079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.560 [2024-10-07 11:31:48.395110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.560 [2024-10-07 11:31:48.398061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.560 [2024-10-07 11:31:48.398171] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.560 [2024-10-07 11:31:48.398202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.398219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.398250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.398282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.398314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.398347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.398380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.405102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.405218] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.405266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.561 [2024-10-07 11:31:48.405285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.405553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.405689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.405723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.405740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.405848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.561 [2024-10-07 11:31:48.408688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.408804] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.408835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.408852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.408885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.408917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.408935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.408949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.408979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.415195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.415309] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.415356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.561 [2024-10-07 11:31:48.415374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.415406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.415438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.415456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.415471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.415502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.561 [2024-10-07 11:31:48.418858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.418971] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.419002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.419020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.419270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.419443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.419478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.419495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.419603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.425727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.425841] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.425872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.561 [2024-10-07 11:31:48.425890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.425922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.425953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.425971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.425985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.426016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.561 [2024-10-07 11:31:48.428956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.429066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.429097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.429115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.429146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.429178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.429196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.429210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.429947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.436171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.436285] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.436343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.561 [2024-10-07 11:31:48.436363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.436396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.436428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.436446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.436460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.436509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.561 [2024-10-07 11:31:48.439500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.439615] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.439647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.439664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.439697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.439729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.439746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.439761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.439791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.446472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.446586] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.446618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.561 [2024-10-07 11:31:48.446635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.446901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.447055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.447087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.447105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.447212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.561 [2024-10-07 11:31:48.450047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.561 [2024-10-07 11:31:48.450156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.561 [2024-10-07 11:31:48.450187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.561 [2024-10-07 11:31:48.450204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.561 [2024-10-07 11:31:48.450236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.561 [2024-10-07 11:31:48.450268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.561 [2024-10-07 11:31:48.450300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.561 [2024-10-07 11:31:48.450331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.561 [2024-10-07 11:31:48.450367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.561 [2024-10-07 11:31:48.456563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.456675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.456706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.562 [2024-10-07 11:31:48.456739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.456774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.456807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.456825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.456838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.457576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.562 [2024-10-07 11:31:48.460209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.460333] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.460365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.562 [2024-10-07 11:31:48.460382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.460633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.460778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.460810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.460826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.460933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.562 [2024-10-07 11:31:48.467004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.467118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.467150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.562 [2024-10-07 11:31:48.467167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.467199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.467232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.467250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.467264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.467294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.562 [2024-10-07 11:31:48.470312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.470436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.470467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.562 [2024-10-07 11:31:48.470484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.470516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.470547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.470583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.470598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.471337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.562 [2024-10-07 11:31:48.477476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.477595] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.477627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.562 [2024-10-07 11:31:48.477644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.477676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.477708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.477726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.477740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.477770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.562 [2024-10-07 11:31:48.481050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.481213] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.481253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.562 [2024-10-07 11:31:48.481278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.481338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.481398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.481424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.481444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.481487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.562 [2024-10-07 11:31:48.487702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.487825] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.487858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.562 [2024-10-07 11:31:48.487876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.488129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.488309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.488358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.488375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.488486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.562 [2024-10-07 11:31:48.491699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.491859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.491904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.562 [2024-10-07 11:31:48.491931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.562 [2024-10-07 11:31:48.491976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.562 [2024-10-07 11:31:48.492021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.562 [2024-10-07 11:31:48.492048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.562 [2024-10-07 11:31:48.492071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.562 [2024-10-07 11:31:48.493052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.562 [2024-10-07 11:31:48.501026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.562 8920.92 IOPS, 34.85 MiB/s [2024-10-07T11:31:53.085Z] [2024-10-07 11:31:48.501248] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.562 [2024-10-07 11:31:48.501284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.562 [2024-10-07 11:31:48.501302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.562 [2024-10-07 11:31:48.502629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.562 [2024-10-07 11:31:48.503588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.562 [2024-10-07 11:31:48.503629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.562 [2024-10-07 11:31:48.503648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.562 [2024-10-07 11:31:48.503843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.562 [2024-10-07 11:31:48.503874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.562 [2024-10-07 11:31:48.504032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.562 [2024-10-07 11:31:48.504064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.562 [2024-10-07 11:31:48.504081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.562 [2024-10-07 11:31:48.505374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.562 [2024-10-07 11:31:48.506156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.562 [2024-10-07 11:31:48.506195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.562 [2024-10-07 11:31:48.506213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.562 [2024-10-07 11:31:48.507113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.562 [2024-10-07 11:31:48.511370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.562 [2024-10-07 11:31:48.511485] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.562 [2024-10-07 11:31:48.511517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.511535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.511587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.511620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.511639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.511653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.511684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.563 [2024-10-07 11:31:48.515136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.515254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.515288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.563 [2024-10-07 11:31:48.515306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.515573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.515722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.515759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.515776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.515884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.563 [2024-10-07 11:31:48.521896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.522013] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.522045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.522062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.522095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.522127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.522148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.522162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.522193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.563 [2024-10-07 11:31:48.525229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.525351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.525384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.563 [2024-10-07 11:31:48.525401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.525434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.525466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.525484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.525514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.526239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.563 [2024-10-07 11:31:48.532385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.532502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.532534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.532552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.532584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.532616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.532634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.532648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.532679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.563 [2024-10-07 11:31:48.535656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.535767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.535799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.563 [2024-10-07 11:31:48.535816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.535848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.535883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.535902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.535916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.535946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.563 [2024-10-07 11:31:48.542564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.542680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.542712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.542729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.542982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.543129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.543164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.543181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.543290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.563 [2024-10-07 11:31:48.546041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.546168] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.546200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.563 [2024-10-07 11:31:48.546217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.546249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.546281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.546330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.546348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.546382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.563 [2024-10-07 11:31:48.552655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.552769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.552800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.552818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.552849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.552881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.552899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.552915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.553652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.563 [2024-10-07 11:31:48.556265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.556399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.556431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.563 [2024-10-07 11:31:48.556448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.556715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.563 [2024-10-07 11:31:48.556870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.563 [2024-10-07 11:31:48.556905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.563 [2024-10-07 11:31:48.556922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.563 [2024-10-07 11:31:48.557033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.563 [2024-10-07 11:31:48.563067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.563 [2024-10-07 11:31:48.563182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.563 [2024-10-07 11:31:48.563214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.563 [2024-10-07 11:31:48.563231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.563 [2024-10-07 11:31:48.563263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.563314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.563350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.563365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.563396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.564 [2024-10-07 11:31:48.566368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.566478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.566510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.564 [2024-10-07 11:31:48.566527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.566559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.566591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.566609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.566624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.567361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.564 [2024-10-07 11:31:48.573521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.573635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.573667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.564 [2024-10-07 11:31:48.573685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.573717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.573749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.573767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.573782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.573812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.564 [2024-10-07 11:31:48.576803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.576915] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.576947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.564 [2024-10-07 11:31:48.576964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.576996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.577029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.577047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.577061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.577107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.564 [2024-10-07 11:31:48.583771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.583894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.583925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.564 [2024-10-07 11:31:48.583943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.584196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.584357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.584393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.584410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.584518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.564 [2024-10-07 11:31:48.587272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.587400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.587432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.564 [2024-10-07 11:31:48.587449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.587480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.587513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.587531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.587545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.587575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.564 [2024-10-07 11:31:48.593875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.593987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.594019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.564 [2024-10-07 11:31:48.594036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.594068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.594099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.594117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.594131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.594899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.564 [2024-10-07 11:31:48.597574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.597686] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.597717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.564 [2024-10-07 11:31:48.597756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.598011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.598170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.598206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.598223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.598365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.564 [2024-10-07 11:31:48.604421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.604536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.604567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.564 [2024-10-07 11:31:48.604584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.604618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.604650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.604668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.604682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.604712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.564 [2024-10-07 11:31:48.607660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.564 [2024-10-07 11:31:48.607776] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.564 [2024-10-07 11:31:48.607807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.564 [2024-10-07 11:31:48.607824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.564 [2024-10-07 11:31:48.607856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.564 [2024-10-07 11:31:48.607888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.564 [2024-10-07 11:31:48.607906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.564 [2024-10-07 11:31:48.607920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.564 [2024-10-07 11:31:48.607950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.564 [2024-10-07 11:31:48.615170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.615284] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.615330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.615350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.615389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.615421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.615455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.615470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.615503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.565 [2024-10-07 11:31:48.618516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.618628] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.618659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.565 [2024-10-07 11:31:48.618677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.618708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.618741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.618759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.618773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.618803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.565 [2024-10-07 11:31:48.625556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.625667] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.625699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.625716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.625967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.626130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.626165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.626183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.626303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.565 [2024-10-07 11:31:48.629118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.629226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.629258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.565 [2024-10-07 11:31:48.629275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.629307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.629355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.629374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.629388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.629418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.565 [2024-10-07 11:31:48.635644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.635757] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.635789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.635807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.635838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.635870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.635888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.635902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.635933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.565 [2024-10-07 11:31:48.639413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.639526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.639557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.565 [2024-10-07 11:31:48.639574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.639826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.639972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.640008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.640025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.640138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.565 [2024-10-07 11:31:48.646226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.646362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.646394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.646412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.646445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.646477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.646500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.646514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.646544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.565 [2024-10-07 11:31:48.649503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.649612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.649643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.565 [2024-10-07 11:31:48.649660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.649709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.649741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.649760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.649774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.649805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.565 [2024-10-07 11:31:48.656782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.656896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.656927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.656945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.656977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.657009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.657027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.657042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.657072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.565 [2024-10-07 11:31:48.660096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.660207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.660238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.565 [2024-10-07 11:31:48.660256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.660288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.660336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.660357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.565 [2024-10-07 11:31:48.660372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.565 [2024-10-07 11:31:48.660402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.565 [2024-10-07 11:31:48.667075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.565 [2024-10-07 11:31:48.667188] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.565 [2024-10-07 11:31:48.667220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.565 [2024-10-07 11:31:48.667237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.565 [2024-10-07 11:31:48.667508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.565 [2024-10-07 11:31:48.667702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.565 [2024-10-07 11:31:48.667734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.667767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.667877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.670588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.670699] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.670730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.670747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.670780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.670812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.670830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.670844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.670874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.566 [2024-10-07 11:31:48.677163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.677275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.677307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.566 [2024-10-07 11:31:48.677339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.677373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.677405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.677423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.677437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.678158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.680815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.680926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.680957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.680974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.681225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.681389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.681446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.681465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.681575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.566 [2024-10-07 11:31:48.687610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.687746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.687778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.566 [2024-10-07 11:31:48.687796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.687829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.687862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.687880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.687894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.687925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.690909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.691021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.691052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.691069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.691101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.691134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.691152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.691166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.691905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.566 [2024-10-07 11:31:48.698009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.698122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.698153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.566 [2024-10-07 11:31:48.698170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.698203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.698235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.698253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.698267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.698311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.701281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.701402] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.701434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.701451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.701483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.701531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.701551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.701565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.701595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.566 [2024-10-07 11:31:48.708187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.708301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.708347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.566 [2024-10-07 11:31:48.708366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.708619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.708813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.708849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.708866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.708974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.711695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.711806] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.711837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.711854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.711885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.711917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.711935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.711950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.711979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.566 [2024-10-07 11:31:48.718276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.718411] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.718444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.566 [2024-10-07 11:31:48.718461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.566 [2024-10-07 11:31:48.718493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.566 [2024-10-07 11:31:48.718525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.566 [2024-10-07 11:31:48.718543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.566 [2024-10-07 11:31:48.718558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.566 [2024-10-07 11:31:48.719298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.566 [2024-10-07 11:31:48.721917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.566 [2024-10-07 11:31:48.722028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.566 [2024-10-07 11:31:48.722059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.566 [2024-10-07 11:31:48.722076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.722374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.722526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.722561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.722579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.722687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.728764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.728891] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.728922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.567 [2024-10-07 11:31:48.728939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.728971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.729004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.729022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.729036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.729066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.567 [2024-10-07 11:31:48.732000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.732118] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.732149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.567 [2024-10-07 11:31:48.732166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.732197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.732229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.732247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.732261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.732292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.739235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.739367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.739399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.567 [2024-10-07 11:31:48.739433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.739467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.739500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.739519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.739533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.739564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.567 [2024-10-07 11:31:48.742561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.742675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.742705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.567 [2024-10-07 11:31:48.742723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.742754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.742786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.742804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.742819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.742849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.749509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.749620] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.749652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.567 [2024-10-07 11:31:48.749669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.749920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.750073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.750111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.750129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.750238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.567 [2024-10-07 11:31:48.753046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.753158] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.753190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.567 [2024-10-07 11:31:48.753207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.753238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.753270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.753305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.753337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.753371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.759603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.759717] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.759749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.567 [2024-10-07 11:31:48.759767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.759798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.759830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.759848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.759863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.759893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.567 [2024-10-07 11:31:48.763312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.763443] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.763475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.567 [2024-10-07 11:31:48.763492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.763744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.763891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.763926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.763943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.764050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.770123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.770235] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.770267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.567 [2024-10-07 11:31:48.770297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.770346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.770381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.770400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.770414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.770444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.567 [2024-10-07 11:31:48.773418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.773528] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.567 [2024-10-07 11:31:48.773559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.567 [2024-10-07 11:31:48.773576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.567 [2024-10-07 11:31:48.773607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.567 [2024-10-07 11:31:48.773640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.567 [2024-10-07 11:31:48.773658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.567 [2024-10-07 11:31:48.773672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.567 [2024-10-07 11:31:48.773702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.567 [2024-10-07 11:31:48.780975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.567 [2024-10-07 11:31:48.781085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.781117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.781134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.781165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.781197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.781215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.781230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.781260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.568 [2024-10-07 11:31:48.784359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.784470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.784501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.568 [2024-10-07 11:31:48.784519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.784550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.784582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.784601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.784615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.784645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.568 [2024-10-07 11:31:48.791461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.791583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.791614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.791631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.791902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.792062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.792097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.792114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.792227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.568 [2024-10-07 11:31:48.795054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.795165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.795196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.568 [2024-10-07 11:31:48.795214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.795246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.795278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.795296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.795310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.795357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.568 [2024-10-07 11:31:48.801620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.801731] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.801762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.801780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.801811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.801843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.801862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.801876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.801907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.568 [2024-10-07 11:31:48.805491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.805605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.805636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.568 [2024-10-07 11:31:48.805654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.805905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.806052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.806087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.806120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.806229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.568 [2024-10-07 11:31:48.812376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.812490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.812521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.812539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.812571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.812603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.812621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.812635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.812665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.568 [2024-10-07 11:31:48.815582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.815692] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.815724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.568 [2024-10-07 11:31:48.815741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.815773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.815805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.815823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.815837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.815867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.568 [2024-10-07 11:31:48.822972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.823087] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.823118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.823136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.823168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.823200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.823218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.823232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.823262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.568 [2024-10-07 11:31:48.826304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.826455] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.826486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.568 [2024-10-07 11:31:48.826504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.826535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.826567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.826585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.826600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.826629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.568 [2024-10-07 11:31:48.833276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.568 [2024-10-07 11:31:48.833399] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.568 [2024-10-07 11:31:48.833432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.568 [2024-10-07 11:31:48.833449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.568 [2024-10-07 11:31:48.833701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.568 [2024-10-07 11:31:48.833847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.568 [2024-10-07 11:31:48.833872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.568 [2024-10-07 11:31:48.833887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.568 [2024-10-07 11:31:48.833993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.569 [2024-10-07 11:31:48.836898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.837008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.837039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.569 [2024-10-07 11:31:48.837056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.837088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.837120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.837138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.837152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.837182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.569 [2024-10-07 11:31:48.843379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.843493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.843525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.569 [2024-10-07 11:31:48.843542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.843573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.843625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.843644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.843658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.843689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.569 [2024-10-07 11:31:48.847103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.847215] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.847246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.569 [2024-10-07 11:31:48.847264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.847529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.847670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.847695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.847709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.847815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.569 [2024-10-07 11:31:48.853962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.854076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.854108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.569 [2024-10-07 11:31:48.854125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.854157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.854188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.854206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.854220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.854251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.569 [2024-10-07 11:31:48.857191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.857299] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.857343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.569 [2024-10-07 11:31:48.857362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.857394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.857426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.857444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.857458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.857507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.569 [2024-10-07 11:31:48.864419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.864535] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.864566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.569 [2024-10-07 11:31:48.864583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.864615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.864648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.864665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.864679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.864710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.569 [2024-10-07 11:31:48.867687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.867799] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.867830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.569 [2024-10-07 11:31:48.867847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.867879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.867911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.867930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.867944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.867974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.569 [2024-10-07 11:31:48.874552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.874664] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.874696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.569 [2024-10-07 11:31:48.874714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.874965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.875116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.875152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.875169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.875277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.569 [2024-10-07 11:31:48.878099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.878207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.878238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.569 [2024-10-07 11:31:48.878273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.878332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.569 [2024-10-07 11:31:48.878370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.569 [2024-10-07 11:31:48.878388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.569 [2024-10-07 11:31:48.878402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.569 [2024-10-07 11:31:48.878433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.569 [2024-10-07 11:31:48.884643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.569 [2024-10-07 11:31:48.884754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.569 [2024-10-07 11:31:48.884785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.569 [2024-10-07 11:31:48.884803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.569 [2024-10-07 11:31:48.884835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.884867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.884884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.884899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.884931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.888303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.888427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.888457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.888475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.888726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.888859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.888893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.888911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.889017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.570 [2024-10-07 11:31:48.895180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.895293] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.895339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.570 [2024-10-07 11:31:48.895359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.895392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.895424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.895462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.895478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.895510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.898399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.898509] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.898540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.898557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.898589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.898621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.898643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.898658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.898688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.570 [2024-10-07 11:31:48.905693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.905819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.905851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.570 [2024-10-07 11:31:48.905869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.905901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.905932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.905950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.905964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.905995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.908953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.909064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.909096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.909113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.909145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.909177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.909194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.909209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.909239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.570 [2024-10-07 11:31:48.915879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.915994] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.916026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.570 [2024-10-07 11:31:48.916043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.916295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.916458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.916494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.916512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.916620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.919422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.919534] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.919565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.919583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.919614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.919646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.919665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.919679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.919708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.570 [2024-10-07 11:31:48.925968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.926080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.926111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.570 [2024-10-07 11:31:48.926128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.926159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.926192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.926209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.926223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.926976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.929603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.929711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.929741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.929759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.930045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.930193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.930219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.930234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.930387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.570 [2024-10-07 11:31:48.936428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.936542] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.936574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.570 [2024-10-07 11:31:48.936591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.570 [2024-10-07 11:31:48.936624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.570 [2024-10-07 11:31:48.936656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.570 [2024-10-07 11:31:48.936674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.570 [2024-10-07 11:31:48.936689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.570 [2024-10-07 11:31:48.936718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.570 [2024-10-07 11:31:48.939687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.570 [2024-10-07 11:31:48.939798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.570 [2024-10-07 11:31:48.939829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.570 [2024-10-07 11:31:48.939847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.939879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.939911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.939929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.939943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.939973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.946874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.946987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.947018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.571 [2024-10-07 11:31:48.947036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.947069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.947101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.947119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.947154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.947188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.571 [2024-10-07 11:31:48.950148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.950258] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.950302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.571 [2024-10-07 11:31:48.950337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.950374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.950407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.950426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.950440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.950471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.957089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.957202] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.957234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.571 [2024-10-07 11:31:48.957252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.957519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.957687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.957721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.957738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.957846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.571 [2024-10-07 11:31:48.960617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.960734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.960766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.571 [2024-10-07 11:31:48.960783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.960815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.960847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.960865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.960880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.960910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.967181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.967311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.967356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.571 [2024-10-07 11:31:48.967374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.967407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.967440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.967458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.967473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.968195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.571 [2024-10-07 11:31:48.970854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.970965] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.970997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.571 [2024-10-07 11:31:48.971014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.971266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.971438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.971473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.971491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.971598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.977631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.977752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.977784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.571 [2024-10-07 11:31:48.977801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.977833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.977866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.977883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.977897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.977928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.571 [2024-10-07 11:31:48.980939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.981054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.981084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.571 [2024-10-07 11:31:48.981102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.981151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.981184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.981203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.981217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.981963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.988074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.988185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.988217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.571 [2024-10-07 11:31:48.988234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.988266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.988297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.988328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.988346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.988378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.571 [2024-10-07 11:31:48.991390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.991500] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.571 [2024-10-07 11:31:48.991531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.571 [2024-10-07 11:31:48.991548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.571 [2024-10-07 11:31:48.991580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.571 [2024-10-07 11:31:48.991611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.571 [2024-10-07 11:31:48.991630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.571 [2024-10-07 11:31:48.991644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.571 [2024-10-07 11:31:48.991675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.571 [2024-10-07 11:31:48.998311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.571 [2024-10-07 11:31:48.998441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:48.998473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.572 [2024-10-07 11:31:48.998490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:48.998742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:48.998889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:48.998925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:48.998943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:48.999070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.572 [2024-10-07 11:31:49.001839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.001963] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.001994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.572 [2024-10-07 11:31:49.002011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.002043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.002075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.002094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.002108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.002138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.572 [2024-10-07 11:31:49.008420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.008539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.008572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.572 [2024-10-07 11:31:49.008589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.008621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.009365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.009401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.009418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.009590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.572 [2024-10-07 11:31:49.012009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.012122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.012154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.572 [2024-10-07 11:31:49.012171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.012442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.012610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.012636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.012651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.012757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.572 [2024-10-07 11:31:49.018791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.018909] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.018960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.572 [2024-10-07 11:31:49.018979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.019012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.019045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.019063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.019077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.019108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.572 [2024-10-07 11:31:49.022100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.022209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.022240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.572 [2024-10-07 11:31:49.022257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.022300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.022352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.022372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.022387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.023109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.572 [2024-10-07 11:31:49.029249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.029375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.029408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.572 [2024-10-07 11:31:49.029425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.029458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.029490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.029508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.029522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.029553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.572 [2024-10-07 11:31:49.032568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.032680] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.032711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.572 [2024-10-07 11:31:49.032729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.032760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.032810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.032831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.032845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.032876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.572 [2024-10-07 11:31:49.039491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.039607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.039638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.572 [2024-10-07 11:31:49.039655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.039907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.040070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.040104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.572 [2024-10-07 11:31:49.040121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.572 [2024-10-07 11:31:49.040229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.572 [2024-10-07 11:31:49.043053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.572 [2024-10-07 11:31:49.043164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.572 [2024-10-07 11:31:49.043195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.572 [2024-10-07 11:31:49.043212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.572 [2024-10-07 11:31:49.043244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.572 [2024-10-07 11:31:49.043276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.572 [2024-10-07 11:31:49.043294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.043308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.043353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.573 [2024-10-07 11:31:49.049586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.049697] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.049728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.573 [2024-10-07 11:31:49.049746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.049778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.049810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.049827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.049841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.049872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.573 [2024-10-07 11:31:49.053267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.053391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.053422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.573 [2024-10-07 11:31:49.053440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.053691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.053841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.053874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.053891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.053998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.573 [2024-10-07 11:31:49.060112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.060226] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.060259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.573 [2024-10-07 11:31:49.060276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.060308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.060356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.060377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.060392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.060422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.573 [2024-10-07 11:31:49.063367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.063477] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.063508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.573 [2024-10-07 11:31:49.063525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.063557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.063588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.063606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.063621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.063650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.573 [2024-10-07 11:31:49.070651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.070767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.070798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.573 [2024-10-07 11:31:49.070835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.070870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.070903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.070921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.070935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.070965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.573 [2024-10-07 11:31:49.073922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.074031] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.074062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.573 [2024-10-07 11:31:49.074079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.074110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.074142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.074160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.074175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.074204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.573 [2024-10-07 11:31:49.080822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.080937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.080968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.573 [2024-10-07 11:31:49.080986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.081238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.081424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.081457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.081474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.081607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.573 [2024-10-07 11:31:49.084411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.084520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.084551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.573 [2024-10-07 11:31:49.084568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.084600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.084632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.084650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.084681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.084714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.573 [2024-10-07 11:31:49.090922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.573 [2024-10-07 11:31:49.091035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.573 [2024-10-07 11:31:49.091067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.573 [2024-10-07 11:31:49.091084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.573 [2024-10-07 11:31:49.091116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.573 [2024-10-07 11:31:49.091147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.573 [2024-10-07 11:31:49.091165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.573 [2024-10-07 11:31:49.091179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.573 [2024-10-07 11:31:49.091210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.094615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.094728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.094763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.574 [2024-10-07 11:31:49.094781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.095048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.095191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.095224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.095241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.095395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.574 [2024-10-07 11:31:49.101484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.101603] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.101634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.574 [2024-10-07 11:31:49.101652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.101684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.101715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.101733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.101747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.101778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.104705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.104834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.104866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.574 [2024-10-07 11:31:49.104884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.104916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.104948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.104966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.104980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.105009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.574 [2024-10-07 11:31:49.112005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.112117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.112148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.574 [2024-10-07 11:31:49.112166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.112198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.112230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.112248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.112262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.112292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.115289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.115421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.115452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.574 [2024-10-07 11:31:49.115470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.115502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.115534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.115552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.115567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.115597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.574 [2024-10-07 11:31:49.122156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.122268] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.122313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.574 [2024-10-07 11:31:49.122349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.122620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.122755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.122789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.122806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.122914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.125767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.125876] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.125908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.574 [2024-10-07 11:31:49.125925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.125956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.125988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.126006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.126020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.126050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.574 [2024-10-07 11:31:49.132248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.132375] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.132407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.574 [2024-10-07 11:31:49.132425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.132457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.132489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.132507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.132522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.132552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.136021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.136132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.136164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.574 [2024-10-07 11:31:49.136181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.136448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.136608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.136643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.136660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.136776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.574 [2024-10-07 11:31:49.142920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.143035] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.574 [2024-10-07 11:31:49.143067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.574 [2024-10-07 11:31:49.143084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.574 [2024-10-07 11:31:49.143116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.574 [2024-10-07 11:31:49.143148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.574 [2024-10-07 11:31:49.143166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.574 [2024-10-07 11:31:49.143179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.574 [2024-10-07 11:31:49.143210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.574 [2024-10-07 11:31:49.146106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.574 [2024-10-07 11:31:49.146214] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.146244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.146263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.146309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.146361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.146385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.146399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.146429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.575 [2024-10-07 11:31:49.153432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.153544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.153576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.575 [2024-10-07 11:31:49.153594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.153625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.153657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.153675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.153689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.153719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.575 [2024-10-07 11:31:49.156760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.156872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.156920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.156939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.156971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.157004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.157022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.157036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.157066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.575 [2024-10-07 11:31:49.163766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.163880] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.163911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.575 [2024-10-07 11:31:49.163929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.164180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.164341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.164377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.164395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.164503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.575 [2024-10-07 11:31:49.167280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.167401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.167433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.167450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.167482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.167514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.167532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.167546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.167576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.575 [2024-10-07 11:31:49.173859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.173978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.174010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.575 [2024-10-07 11:31:49.174028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.174060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.174115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.174143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.174161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.174963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.575 [2024-10-07 11:31:49.177583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.177696] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.177728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.177745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.177997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.178144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.178180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.178197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.178335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.575 [2024-10-07 11:31:49.184476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.184589] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.184622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.575 [2024-10-07 11:31:49.184639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.184672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.184704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.184722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.184736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.184767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.575 [2024-10-07 11:31:49.187671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.187780] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.187810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.187827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.187858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.187889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.187908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.187929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.187958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.575 [2024-10-07 11:31:49.194967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.195080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.195112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.575 [2024-10-07 11:31:49.195129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.195161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.195193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.195212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.195226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.575 [2024-10-07 11:31:49.195256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.575 [2024-10-07 11:31:49.198268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.575 [2024-10-07 11:31:49.198403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.575 [2024-10-07 11:31:49.198435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.575 [2024-10-07 11:31:49.198453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.575 [2024-10-07 11:31:49.198485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.575 [2024-10-07 11:31:49.198517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.575 [2024-10-07 11:31:49.198535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.575 [2024-10-07 11:31:49.198550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.198580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.205269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.205395] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.205427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.205444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.205711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.205862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.205897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.205914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.206021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.576 [2024-10-07 11:31:49.208826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.208939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.208970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.576 [2024-10-07 11:31:49.209008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.209042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.209074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.209092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.209106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.209136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.215376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.215488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.215519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.215537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.215568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.215600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.215618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.215632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.215663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.576 [2024-10-07 11:31:49.219106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.219220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.219251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.576 [2024-10-07 11:31:49.219269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.219300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.219568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.219595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.219610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.219748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.226163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.226276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.226335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.226356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.226390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.226423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.226441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.226471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.226504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.576 [2024-10-07 11:31:49.229326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.229436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.229467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.576 [2024-10-07 11:31:49.229485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.229516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.229548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.229566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.229581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.229616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.236751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.236864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.236896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.236913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.236945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.236977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.236995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.237009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.237039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.576 [2024-10-07 11:31:49.240063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.240179] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.240209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.576 [2024-10-07 11:31:49.240227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.240258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.240290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.240308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.240337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.240369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.247057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.247191] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.247222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.247240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.247523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.247674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.247709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.247727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.247834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.576 [2024-10-07 11:31:49.250632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.250743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.250774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.576 [2024-10-07 11:31:49.250792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.250824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.576 [2024-10-07 11:31:49.250856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.576 [2024-10-07 11:31:49.250874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.576 [2024-10-07 11:31:49.250888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.576 [2024-10-07 11:31:49.250918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.576 [2024-10-07 11:31:49.257184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.576 [2024-10-07 11:31:49.257351] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.576 [2024-10-07 11:31:49.257384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.576 [2024-10-07 11:31:49.257402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.576 [2024-10-07 11:31:49.257436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.257469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.257487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.257502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.257534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.577 [2024-10-07 11:31:49.261065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.261177] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.261208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.577 [2024-10-07 11:31:49.261225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.261523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.261660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.261694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.261712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.261821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.577 [2024-10-07 11:31:49.268028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.268141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.268173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.577 [2024-10-07 11:31:49.268190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.268222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.268254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.268272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.268286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.268330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.577 [2024-10-07 11:31:49.271179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.271292] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.271335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.577 [2024-10-07 11:31:49.271355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.271388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.271420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.271438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.271452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.271482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.577 [2024-10-07 11:31:49.278631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.278743] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.278775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.577 [2024-10-07 11:31:49.278793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.278825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.278857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.278875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.278905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.278967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.577 [2024-10-07 11:31:49.281929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.282040] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.282071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.577 [2024-10-07 11:31:49.282089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.282122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.282153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.282172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.282186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.282216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.577 [2024-10-07 11:31:49.288930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.289080] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.289124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.577 [2024-10-07 11:31:49.289144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.289437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.289588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.289621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.289639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.289750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.577 [2024-10-07 11:31:49.292631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.292754] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.292801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.577 [2024-10-07 11:31:49.292831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.292875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.292920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.292938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.292953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.292983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.577 [2024-10-07 11:31:49.299114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.299229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.299288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.577 [2024-10-07 11:31:49.299308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.299360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.299394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.299412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.299427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.299459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.577 [2024-10-07 11:31:49.302940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.303053] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.577 [2024-10-07 11:31:49.303084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.577 [2024-10-07 11:31:49.303101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.577 [2024-10-07 11:31:49.303387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.577 [2024-10-07 11:31:49.303540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.577 [2024-10-07 11:31:49.303565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.577 [2024-10-07 11:31:49.303579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.577 [2024-10-07 11:31:49.303686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.577 [2024-10-07 11:31:49.309898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.577 [2024-10-07 11:31:49.310015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.310046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.578 [2024-10-07 11:31:49.310064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.310096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.310127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.310145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.310159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.310189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.578 [2024-10-07 11:31:49.313055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.313166] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.313197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.578 [2024-10-07 11:31:49.313215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.313247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.313298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.313331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.313349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.313380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.578 [2024-10-07 11:31:49.320494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.320609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.320642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.578 [2024-10-07 11:31:49.320659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.320691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.320723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.320741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.320755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.320785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.578 [2024-10-07 11:31:49.323820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.323933] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.323964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.578 [2024-10-07 11:31:49.323981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.324013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.324045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.324063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.324078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.324107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.578 [2024-10-07 11:31:49.330765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.330878] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.330910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.578 [2024-10-07 11:31:49.330928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.331180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.331341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.331383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.331399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.331508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.578 [2024-10-07 11:31:49.334436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.334550] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.334582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.578 [2024-10-07 11:31:49.334599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.334631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.334663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.334681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.334695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.334725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.578 [2024-10-07 11:31:49.340896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.341022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.341054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.578 [2024-10-07 11:31:49.341071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.341103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.341136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.341153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.341168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.341199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.578 [2024-10-07 11:31:49.344785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.344907] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.344938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.578 [2024-10-07 11:31:49.344956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.345210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.345370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.345401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.345418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.345527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.578 [2024-10-07 11:31:49.351725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.351842] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.351874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.578 [2024-10-07 11:31:49.351918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.351952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.351984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.352002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.352016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.352047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.578 [2024-10-07 11:31:49.356854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.578 [2024-10-07 11:31:49.357156] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.578 [2024-10-07 11:31:49.357225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.578 [2024-10-07 11:31:49.357262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.578 [2024-10-07 11:31:49.358713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.578 [2024-10-07 11:31:49.359683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.578 [2024-10-07 11:31:49.359735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.578 [2024-10-07 11:31:49.359754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.578 [2024-10-07 11:31:49.359935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.578 [2024-10-07 11:31:49.362402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.362518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.362550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.579 [2024-10-07 11:31:49.362568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.362600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.362632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.362650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.362665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.362696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.579 [2024-10-07 11:31:49.369211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.370362] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.370437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.579 [2024-10-07 11:31:49.370475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.370707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.372490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.372586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.372619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.373872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.579 [2024-10-07 11:31:49.374140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.374423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.374482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.579 [2024-10-07 11:31:49.374524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.375984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.376819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.376861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.376886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.377002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.579 [2024-10-07 11:31:49.379927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.380046] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.380078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.579 [2024-10-07 11:31:49.380096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.380128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.380160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.380179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.380193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.380223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.579 [2024-10-07 11:31:49.384516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.384633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.384665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.579 [2024-10-07 11:31:49.384683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.384716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.384748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.384766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.384780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.384810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.579 [2024-10-07 11:31:49.390644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.390788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.390821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.579 [2024-10-07 11:31:49.390839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.390871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.390903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.390921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.390936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.390965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.579 [2024-10-07 11:31:49.394611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.394725] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.394757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.579 [2024-10-07 11:31:49.394774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.394806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.394845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.394863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.394877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.394907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.579 [2024-10-07 11:31:49.401020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.401136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.401169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.579 [2024-10-07 11:31:49.401186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.401453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.401603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.401638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.401656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.401763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.579 [2024-10-07 11:31:49.404701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.404813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.404844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.579 [2024-10-07 11:31:49.404862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.404912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.404946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.404965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.404979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.405010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.579 [2024-10-07 11:31:49.411127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.411245] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.579 [2024-10-07 11:31:49.411278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.579 [2024-10-07 11:31:49.411295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.579 [2024-10-07 11:31:49.411341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.579 [2024-10-07 11:31:49.411376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.579 [2024-10-07 11:31:49.411395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.579 [2024-10-07 11:31:49.411409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.579 [2024-10-07 11:31:49.411440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.579 [2024-10-07 11:31:49.415113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.579 [2024-10-07 11:31:49.415229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.415261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.415278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.415544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.415705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.415733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.415748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.415855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.580 [2024-10-07 11:31:49.422147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.422262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.422309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.580 [2024-10-07 11:31:49.422368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.422404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.422437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.422455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.422499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.422533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.580 [2024-10-07 11:31:49.425368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.425472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.425503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.425521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.425553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.425585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.425603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.425618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.425647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.580 [2024-10-07 11:31:49.432932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.433077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.433109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.580 [2024-10-07 11:31:49.433127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.433160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.433194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.433213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.433228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.433258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.580 [2024-10-07 11:31:49.436309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.436441] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.436473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.436491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.436523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.436555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.436574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.436589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.436620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.580 [2024-10-07 11:31:49.443349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.443456] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.443510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.580 [2024-10-07 11:31:49.443529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.443782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.443944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.443976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.443993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.444100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.580 [2024-10-07 11:31:49.447001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.447119] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.447151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.447169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.447200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.447233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.447251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.447265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.447296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.580 [2024-10-07 11:31:49.453545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.453660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.453691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.580 [2024-10-07 11:31:49.453709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.453741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.453773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.453791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.453806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.453837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.580 [2024-10-07 11:31:49.457477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.457593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.457624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.457642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.457897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.458067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.458093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.458108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.458216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.580 [2024-10-07 11:31:49.464612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.464746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.464779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.580 [2024-10-07 11:31:49.464797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.464830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.464862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.464880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.464895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.464925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.580 [2024-10-07 11:31:49.467810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.580 [2024-10-07 11:31:49.467925] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.580 [2024-10-07 11:31:49.467964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.580 [2024-10-07 11:31:49.467981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.580 [2024-10-07 11:31:49.468014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.580 [2024-10-07 11:31:49.468046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.580 [2024-10-07 11:31:49.468064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.580 [2024-10-07 11:31:49.468079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.580 [2024-10-07 11:31:49.468109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.475280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.475407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.475439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.475457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.475489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.475521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.475540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.475554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.475584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.581 [2024-10-07 11:31:49.478762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.478879] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.478914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.581 [2024-10-07 11:31:49.478931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.478964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.478997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.479014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.479028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.479058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.485787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.485902] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.485944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.485961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.486213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.486386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.486413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.486429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.486545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.581 [2024-10-07 11:31:49.489448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.489561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.489592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.581 [2024-10-07 11:31:49.489609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.489641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.489673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.489691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.489706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.489736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.495964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.496076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.496108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.496143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.496177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.496209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.496228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.496243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.496273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.581 8945.31 IOPS, 34.94 MiB/s [2024-10-07T11:31:53.104Z] [2024-10-07 11:31:49.503832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.504029] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.504068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.581 [2024-10-07 11:31:49.504088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.504121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.504153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.504172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.504192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.504224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.506930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.507042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.507073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.507090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.507122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.507153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.507172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.507186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.507215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.581 [2024-10-07 11:31:49.514173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.514370] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.514405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.581 [2024-10-07 11:31:49.514423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.514686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.514854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.514916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.514935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.515054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.517815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.517926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.517960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.517977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.518024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.518060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.518078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.518092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.518139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.581 [2024-10-07 11:31:49.524524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.524647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.524680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.581 [2024-10-07 11:31:49.524698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.524730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.524763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.581 [2024-10-07 11:31:49.524780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.581 [2024-10-07 11:31:49.524795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.581 [2024-10-07 11:31:49.524826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.581 [2024-10-07 11:31:49.528512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.581 [2024-10-07 11:31:49.528627] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.581 [2024-10-07 11:31:49.528658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.581 [2024-10-07 11:31:49.528676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.581 [2024-10-07 11:31:49.528927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.581 [2024-10-07 11:31:49.529080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.529126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.529145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.529257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.582 [2024-10-07 11:31:49.535515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.535651] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.535683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.582 [2024-10-07 11:31:49.535700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.535732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.535764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.535782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.535797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.535828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.582 [2024-10-07 11:31:49.538654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.538766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.538797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.582 [2024-10-07 11:31:49.538815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.538847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.538879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.538897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.538912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.538941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.582 [2024-10-07 11:31:49.546135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.546274] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.546334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.582 [2024-10-07 11:31:49.546356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.546391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.546425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.546443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.546458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.546489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.582 [2024-10-07 11:31:49.549590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.549704] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.549736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.582 [2024-10-07 11:31:49.549754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.549809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.549842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.549861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.549875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.549919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.582 [2024-10-07 11:31:49.556650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.556785] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.556818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.582 [2024-10-07 11:31:49.556836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.557088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.557247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.557282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.557305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.557430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.582 [2024-10-07 11:31:49.560257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.560385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.560417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.582 [2024-10-07 11:31:49.560434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.560466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.560499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.560517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.560531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.560561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.582 [2024-10-07 11:31:49.566744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.566857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.566889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.582 [2024-10-07 11:31:49.566907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.566939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.566970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.566989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.567019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.567053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.582 [2024-10-07 11:31:49.570633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.570752] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.570785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.582 [2024-10-07 11:31:49.570802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.571054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.571208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.571245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.571263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.571384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.582 [2024-10-07 11:31:49.577532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.577647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.582 [2024-10-07 11:31:49.577679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.582 [2024-10-07 11:31:49.577696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.582 [2024-10-07 11:31:49.577728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.582 [2024-10-07 11:31:49.577760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.582 [2024-10-07 11:31:49.577778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.582 [2024-10-07 11:31:49.577793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.582 [2024-10-07 11:31:49.577824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.582 [2024-10-07 11:31:49.580727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.582 [2024-10-07 11:31:49.580839] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.580870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.580888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.580920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.580952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.580971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.580987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.581017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.583 [2024-10-07 11:31:49.588104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.588219] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.588267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.583 [2024-10-07 11:31:49.588286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.588334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.588369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.588388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.588402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.588433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.583 [2024-10-07 11:31:49.591494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.591605] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.591637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.591654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.591687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.591719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.591737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.591752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.591781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.583 [2024-10-07 11:31:49.598549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.598690] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.598723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.583 [2024-10-07 11:31:49.598742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.598999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.599154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.599190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.599209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.599341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.583 [2024-10-07 11:31:49.602200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.602339] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.602372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.602398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.602432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.602487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.602507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.602522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.602552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.583 [2024-10-07 11:31:49.608711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.608829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.608861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.583 [2024-10-07 11:31:49.608878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.608911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.608943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.608961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.608976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.609007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.583 [2024-10-07 11:31:49.612641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.612746] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.612777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.612795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.613046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.613200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.613226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.613241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.613362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.583 [2024-10-07 11:31:49.619527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.619642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.619680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.583 [2024-10-07 11:31:49.619697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.619729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.619761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.619779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.619793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.619844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.583 [2024-10-07 11:31:49.622726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.622837] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.622869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.622887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.622919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.622952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.622970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.622985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.623015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.583 [2024-10-07 11:31:49.630182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.630348] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.630384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.583 [2024-10-07 11:31:49.630407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.630443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.630477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.583 [2024-10-07 11:31:49.630495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.583 [2024-10-07 11:31:49.630510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.583 [2024-10-07 11:31:49.630541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.583 [2024-10-07 11:31:49.633663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.583 [2024-10-07 11:31:49.633775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.583 [2024-10-07 11:31:49.633806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.583 [2024-10-07 11:31:49.633825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.583 [2024-10-07 11:31:49.633858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.583 [2024-10-07 11:31:49.633891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.633910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.633924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.633955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.640782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.640894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.640926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.640968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.641223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.641396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.641432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.641450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.641559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.584 [2024-10-07 11:31:49.644362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.644474] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.644505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.584 [2024-10-07 11:31:49.644522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.644554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.644587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.644605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.644619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.644649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.650874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.650984] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.651016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.651033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.651064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.651096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.651114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.651128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.651159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.584 [2024-10-07 11:31:49.654775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.654884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.654915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.584 [2024-10-07 11:31:49.654932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.655183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.655359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.655411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.655429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.655538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.661653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.661766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.661806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.661824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.661856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.661888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.661907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.661921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.661952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.584 [2024-10-07 11:31:49.664868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.664977] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.665008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.584 [2024-10-07 11:31:49.665025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.665056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.665088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.665106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.665121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.665150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.672217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.672344] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.672377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.672394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.672427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.672459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.672477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.672492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.672522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.584 [2024-10-07 11:31:49.675573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.675687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.675719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.584 [2024-10-07 11:31:49.675736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.675768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.675801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.675820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.675834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.675864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.682619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.682734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.682765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.682783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.683034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.683193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.683221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.683236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.683359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.584 [2024-10-07 11:31:49.686174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.686295] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.686340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.584 [2024-10-07 11:31:49.686359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.686392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.686425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.584 [2024-10-07 11:31:49.686443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.584 [2024-10-07 11:31:49.686457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.584 [2024-10-07 11:31:49.686487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.584 [2024-10-07 11:31:49.692709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.584 [2024-10-07 11:31:49.692829] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.584 [2024-10-07 11:31:49.692860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.584 [2024-10-07 11:31:49.692878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.584 [2024-10-07 11:31:49.692929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.584 [2024-10-07 11:31:49.692962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.692980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.692994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.693025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.696548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.696662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.696702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.696719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.696970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.697130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.697162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.697179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.697285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.585 [2024-10-07 11:31:49.703413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.703529] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.703561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.585 [2024-10-07 11:31:49.703578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.703611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.703643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.703661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.703676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.703706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.706636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.706744] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.706775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.706792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.706824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.706858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.706877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.706910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.706942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.585 [2024-10-07 11:31:49.713965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.714099] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.714132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.585 [2024-10-07 11:31:49.714150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.714183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.714215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.714234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.714248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.714279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.717355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.717465] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.717497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.717515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.717548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.717580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.717599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.717614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.717644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.585 [2024-10-07 11:31:49.724515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.724648] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.724681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.585 [2024-10-07 11:31:49.724699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.724954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.725103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.725139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.725158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.725267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.728052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.728164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.728222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.728241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.728274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.728307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.728340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.728356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.728388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.585 [2024-10-07 11:31:49.734615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.734730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.734762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.585 [2024-10-07 11:31:49.734779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.734811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.734847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.734865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.734879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.734910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.738382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.738492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.738523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.738544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.738796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.738947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.738982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.739001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.739114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.585 [2024-10-07 11:31:49.745254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.745383] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.745415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.585 [2024-10-07 11:31:49.745433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.585 [2024-10-07 11:31:49.745465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.585 [2024-10-07 11:31:49.745521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.585 [2024-10-07 11:31:49.745541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.585 [2024-10-07 11:31:49.745555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.585 [2024-10-07 11:31:49.745587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.585 [2024-10-07 11:31:49.748472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.585 [2024-10-07 11:31:49.748583] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.585 [2024-10-07 11:31:49.748614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.585 [2024-10-07 11:31:49.748631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.748663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.748695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.748713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.748727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.748757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.586 [2024-10-07 11:31:49.755771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.755887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.755919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.586 [2024-10-07 11:31:49.755937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.755969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.756003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.756021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.756035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.756066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.586 [2024-10-07 11:31:49.759098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.759212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.759243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.586 [2024-10-07 11:31:49.759261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.759293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.759342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.759363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.759378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.759426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.586 [2024-10-07 11:31:49.766161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.766273] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.766335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.586 [2024-10-07 11:31:49.766355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.766609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.766769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.766804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.766821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.766929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.586 [2024-10-07 11:31:49.769786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.769896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.769928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.586 [2024-10-07 11:31:49.769945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.769977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.770009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.770027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.770041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.770071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.586 [2024-10-07 11:31:49.776273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.776397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.776429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.586 [2024-10-07 11:31:49.776446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.776478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.776509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.776527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.776542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.776572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.586 [2024-10-07 11:31:49.780139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.780251] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.780282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.586 [2024-10-07 11:31:49.780331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.780589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.780749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.780784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.780801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.780909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.586 [2024-10-07 11:31:49.786974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.787090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.787122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.586 [2024-10-07 11:31:49.787140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.787171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.787204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.787222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.787236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.787268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.586 [2024-10-07 11:31:49.790227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.790367] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.790401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.586 [2024-10-07 11:31:49.790418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.790451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.790483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.790501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.790516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.790546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.586 [2024-10-07 11:31:49.797642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.586 [2024-10-07 11:31:49.797789] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.586 [2024-10-07 11:31:49.797823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.586 [2024-10-07 11:31:49.797842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.586 [2024-10-07 11:31:49.797875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.586 [2024-10-07 11:31:49.797914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.586 [2024-10-07 11:31:49.797962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.586 [2024-10-07 11:31:49.797978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.586 [2024-10-07 11:31:49.798010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.586 [2024-10-07 11:31:49.801024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.801136] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.801168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.801186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.801219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.801252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.801270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.801285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.801328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.587 [2024-10-07 11:31:49.808090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.808227] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.808260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.587 [2024-10-07 11:31:49.808278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.808554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.808706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.808742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.808760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.808872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.587 [2024-10-07 11:31:49.811688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.811801] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.811832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.811849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.811881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.811913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.811931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.811946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.811976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.587 [2024-10-07 11:31:49.818190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.818329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.818364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.587 [2024-10-07 11:31:49.818382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.818416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.818449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.818467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.818489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.818520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.587 [2024-10-07 11:31:49.822003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.822129] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.822161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.822179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.822461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.822647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.822679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.822696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.822802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.587 [2024-10-07 11:31:49.828919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.829039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.829072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.587 [2024-10-07 11:31:49.829089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.829122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.829154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.829172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.829186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.829216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.587 [2024-10-07 11:31:49.832089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.832201] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.832233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.832250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.832302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.832351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.832370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.832385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.832414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.587 [2024-10-07 11:31:49.839524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.839640] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.839671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.587 [2024-10-07 11:31:49.839689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.839721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.839753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.839771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.839785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.839816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.587 [2024-10-07 11:31:49.842870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.842987] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.843019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.843037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.843069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.843102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.843120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.843135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.843165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.587 [2024-10-07 11:31:49.849844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.849966] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.849998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.587 [2024-10-07 11:31:49.850016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.850267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.850454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.850491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.850525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.587 [2024-10-07 11:31:49.850636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.587 [2024-10-07 11:31:49.853409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.587 [2024-10-07 11:31:49.853521] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.587 [2024-10-07 11:31:49.853552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.587 [2024-10-07 11:31:49.853570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.587 [2024-10-07 11:31:49.853601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.587 [2024-10-07 11:31:49.853634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.587 [2024-10-07 11:31:49.853652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.587 [2024-10-07 11:31:49.853666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.853697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.859939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.860054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.860086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.860103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.860135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.860166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.860184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.860198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.860229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.588 [2024-10-07 11:31:49.863712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.863824] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.863855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.588 [2024-10-07 11:31:49.863873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.864131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.864310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.864358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.864375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.864483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.870564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.870677] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.870724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.870744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.870777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.870810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.870828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.870842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.870872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.588 [2024-10-07 11:31:49.873803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.873913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.873944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.588 [2024-10-07 11:31:49.873961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.873992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.874025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.874042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.874057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.874087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.881097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.881212] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.881244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.881262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.881293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.881343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.881364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.881379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.881409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.588 [2024-10-07 11:31:49.884424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.884537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.884568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.588 [2024-10-07 11:31:49.884586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.884617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.884668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.884687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.884701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.884731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.891423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.891537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.891568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.891585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.891840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.891987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.892024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.892042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.892150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.588 [2024-10-07 11:31:49.894971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.895082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.895113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.588 [2024-10-07 11:31:49.895131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.895162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.895194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.895212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.895226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.895256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.901513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.901624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.901656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.901673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.901704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.901736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.901753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.901767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.901815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.588 [2024-10-07 11:31:49.905213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.905857] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.905903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.588 [2024-10-07 11:31:49.905923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.906182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.588 [2024-10-07 11:31:49.906359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.588 [2024-10-07 11:31:49.906393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.588 [2024-10-07 11:31:49.906410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.588 [2024-10-07 11:31:49.906518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.588 [2024-10-07 11:31:49.912103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.588 [2024-10-07 11:31:49.912217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.588 [2024-10-07 11:31:49.912249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.588 [2024-10-07 11:31:49.912266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.588 [2024-10-07 11:31:49.912298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.912348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.912369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.912383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.912414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.589 [2024-10-07 11:31:49.915303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.915438] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.915469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.589 [2024-10-07 11:31:49.915486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.915518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.915550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.915568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.915582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.915612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.589 [2024-10-07 11:31:49.922596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.922711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.922743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.589 [2024-10-07 11:31:49.922786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.922823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.922855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.922873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.922887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.922918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.589 [2024-10-07 11:31:49.925912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.926023] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.926054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.589 [2024-10-07 11:31:49.926071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.926103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.926135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.926154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.926168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.926198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.589 [2024-10-07 11:31:49.932967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.933101] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.933132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.589 [2024-10-07 11:31:49.933150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.933419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.933576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.933611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.933629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.933737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.589 [2024-10-07 11:31:49.936513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.936624] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.936655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.589 [2024-10-07 11:31:49.936672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.936704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.936735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.936770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.936786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.936817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.589 [2024-10-07 11:31:49.943062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.943182] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.943213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.589 [2024-10-07 11:31:49.943231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.943263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.943303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.943336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.943352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.943392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.589 [2024-10-07 11:31:49.946877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.946989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.947020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.589 [2024-10-07 11:31:49.947037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.947290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.947452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.947500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.947518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.947626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.589 [2024-10-07 11:31:49.953802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.953930] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.953962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.589 [2024-10-07 11:31:49.953981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.954013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.954046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.954064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.954079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.954110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.589 [2024-10-07 11:31:49.956998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.957109] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.957141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.589 [2024-10-07 11:31:49.957159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.589 [2024-10-07 11:31:49.957192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.589 [2024-10-07 11:31:49.957224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.589 [2024-10-07 11:31:49.957242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.589 [2024-10-07 11:31:49.957257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.589 [2024-10-07 11:31:49.957286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.589 [2024-10-07 11:31:49.964425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.589 [2024-10-07 11:31:49.964541] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.589 [2024-10-07 11:31:49.964573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.589 [2024-10-07 11:31:49.964590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.964622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.964654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.964672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.964686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.964721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.590 [2024-10-07 11:31:49.967786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.967904] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.967935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.590 [2024-10-07 11:31:49.967952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.967984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.968017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.968035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.968050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.968080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.590 [2024-10-07 11:31:49.974849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.974978] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.975010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.590 [2024-10-07 11:31:49.975028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.975304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.975479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.975511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.975528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.975639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.590 [2024-10-07 11:31:49.978445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.978558] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.978590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.590 [2024-10-07 11:31:49.978607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.978639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.978670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.978688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.978703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.978733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.590 [2024-10-07 11:31:49.984953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.985068] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.985100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.590 [2024-10-07 11:31:49.985117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.985150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.985182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.985199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.985213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.985244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.590 [2024-10-07 11:31:49.988833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.988951] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.988982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.590 [2024-10-07 11:31:49.988999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.989251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.989426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.989462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.989499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.989608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.590 [2024-10-07 11:31:49.995715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.995828] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.995860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.590 [2024-10-07 11:31:49.995888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.995920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.995952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.995970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.995984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.996014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.590 [2024-10-07 11:31:49.998920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:49.999032] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:49.999063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.590 [2024-10-07 11:31:49.999081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:49.999112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:49.999144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:49.999162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:49.999176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:49.999205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.590 [2024-10-07 11:31:50.006348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:50.006473] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:50.006505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.590 [2024-10-07 11:31:50.006523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:50.006556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:50.006587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:50.006605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:50.006621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.590 [2024-10-07 11:31:50.006651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.590 [2024-10-07 11:31:50.009684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.590 [2024-10-07 11:31:50.009830] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.590 [2024-10-07 11:31:50.009862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.590 [2024-10-07 11:31:50.009880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.590 [2024-10-07 11:31:50.009913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.590 [2024-10-07 11:31:50.009946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.590 [2024-10-07 11:31:50.009964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.590 [2024-10-07 11:31:50.009978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.010008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.016799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.016939] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.016971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.016990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.017247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.017425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.017461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.017480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.017591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.591 [2024-10-07 11:31:50.020446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.020561] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.020594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.591 [2024-10-07 11:31:50.020612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.020644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.020676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.020695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.020720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.020750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.027054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.027198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.027230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.027248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.027288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.027365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.027387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.027402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.027434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.591 [2024-10-07 11:31:50.031004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.031117] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.031149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.591 [2024-10-07 11:31:50.031167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.031435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.031590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.031626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.031644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.031751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.037940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.038054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.038086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.038104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.038136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.038176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.038195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.038209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.038240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.591 [2024-10-07 11:31:50.041155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.041265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.041296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.591 [2024-10-07 11:31:50.041329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.041366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.041398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.041417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.041432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.041478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.048527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.048642] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.048674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.048692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.048724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.048756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.048774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.048788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.048819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.591 [2024-10-07 11:31:50.051905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.052018] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.052049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.591 [2024-10-07 11:31:50.052066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.052100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.052132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.052150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.052164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.052194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.058917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.059030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.059062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.059080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.059347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.059504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.059539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.059558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.059666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.591 [2024-10-07 11:31:50.062500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.062612] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.062644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.591 [2024-10-07 11:31:50.062678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.062712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.062745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.062767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.591 [2024-10-07 11:31:50.062781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.591 [2024-10-07 11:31:50.062811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.591 [2024-10-07 11:31:50.069013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.591 [2024-10-07 11:31:50.069141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.591 [2024-10-07 11:31:50.069173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.591 [2024-10-07 11:31:50.069191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.591 [2024-10-07 11:31:50.069224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.591 [2024-10-07 11:31:50.069256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.591 [2024-10-07 11:31:50.069274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.069288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.069337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.072811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.072926] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.072957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.072975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.073226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.073392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.073429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.073447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.073555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.592 [2024-10-07 11:31:50.079709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.079838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.079871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.592 [2024-10-07 11:31:50.079889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.079921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.079953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.079988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.080004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.080036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.082908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.083022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.083054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.083071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.083107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.083142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.083160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.083174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.083204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.592 [2024-10-07 11:31:50.090198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.090341] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.090374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.592 [2024-10-07 11:31:50.090391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.090425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.090458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.090476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.090490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.090521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.093533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.093646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.093677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.093694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.093726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.093758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.093778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.093792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.093822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.592 [2024-10-07 11:31:50.100485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.100599] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.100630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.592 [2024-10-07 11:31:50.100647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.100899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.101076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.101113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.101131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.101239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.104030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.104141] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.104172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.104189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.104221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.104252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.104271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.104285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.104331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.592 [2024-10-07 11:31:50.110581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.110695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.110727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.592 [2024-10-07 11:31:50.110745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.110777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.110809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.110828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.110842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.110872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.114331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.114444] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.114476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.114494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.114763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.114923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.114955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.114972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.115082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.592 [2024-10-07 11:31:50.121180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.121296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.121342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.592 [2024-10-07 11:31:50.121361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.121393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.121426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.592 [2024-10-07 11:31:50.121444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.592 [2024-10-07 11:31:50.121459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.592 [2024-10-07 11:31:50.121489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.592 [2024-10-07 11:31:50.124419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.592 [2024-10-07 11:31:50.124533] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.592 [2024-10-07 11:31:50.124565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.592 [2024-10-07 11:31:50.124583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.592 [2024-10-07 11:31:50.124615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.592 [2024-10-07 11:31:50.124648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.124666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.124681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.124711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.593 [2024-10-07 11:31:50.132177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.132343] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.132377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.593 [2024-10-07 11:31:50.132396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.132431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.132464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.132483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.132535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.132570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.593 [2024-10-07 11:31:50.135608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.135734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.135766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.593 [2024-10-07 11:31:50.135784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.135817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.135849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.135868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.135883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.135913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.593 [2024-10-07 11:31:50.142824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.142972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.143006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.593 [2024-10-07 11:31:50.143025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.143287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.143454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.143481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.143498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.143607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.593 [2024-10-07 11:31:50.146396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.146510] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.146541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.593 [2024-10-07 11:31:50.146560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.146593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.146625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.146643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.146658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.146689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.593 [2024-10-07 11:31:50.152933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.153074] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.153107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.593 [2024-10-07 11:31:50.153125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.153157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.153190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.153208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.153222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.153253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.593 [2024-10-07 11:31:50.156823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.156936] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.156968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.593 [2024-10-07 11:31:50.156985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.157236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.157409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.157446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.157464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.157572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.593 [2024-10-07 11:31:50.163682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.163798] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.163829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.593 [2024-10-07 11:31:50.163847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.163879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.163911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.163929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.163943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.163973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.593 [2024-10-07 11:31:50.166916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.167028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.167059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.593 [2024-10-07 11:31:50.167076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.167108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.167157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.167176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.167191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.167220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.593 [2024-10-07 11:31:50.174190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.593 [2024-10-07 11:31:50.174330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.593 [2024-10-07 11:31:50.174363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.593 [2024-10-07 11:31:50.174381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.593 [2024-10-07 11:31:50.174415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.593 [2024-10-07 11:31:50.174448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.593 [2024-10-07 11:31:50.174465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.593 [2024-10-07 11:31:50.174480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.593 [2024-10-07 11:31:50.174510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.593 [2024-10-07 11:31:50.177551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.177661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.177692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.177709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.177741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.177773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.177792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.177806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.177836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.594 [2024-10-07 11:31:50.184557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.184670] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.184701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.594 [2024-10-07 11:31:50.184718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.184985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.185141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.185176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.185194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.185335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.594 [2024-10-07 11:31:50.188117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.188228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.188260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.188277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.188309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.188358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.188377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.188391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.188421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.594 [2024-10-07 11:31:50.194647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.194761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.194792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.594 [2024-10-07 11:31:50.194810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.194842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.194874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.194892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.194906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.194937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.594 [2024-10-07 11:31:50.198495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.198608] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.198639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.198656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.198912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.199073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.199108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.199125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.199243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.594 [2024-10-07 11:31:50.205298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.205427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.205459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.594 [2024-10-07 11:31:50.205494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.205528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.205560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.205578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.205592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.205623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.594 [2024-10-07 11:31:50.208584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.208695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.208726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.208743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.208774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.208806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.208824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.208837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.208868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.594 [2024-10-07 11:31:50.215864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.215979] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.216011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.594 [2024-10-07 11:31:50.216028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.216060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.216092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.216109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.216123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.216154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.594 [2024-10-07 11:31:50.219222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.219346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.219377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.219394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.219426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.219458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.219493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.219509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.219540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.594 [2024-10-07 11:31:50.226176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.226301] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.226347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.594 [2024-10-07 11:31:50.226366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.594 [2024-10-07 11:31:50.226620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.594 [2024-10-07 11:31:50.226787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.594 [2024-10-07 11:31:50.226822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.594 [2024-10-07 11:31:50.226840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.594 [2024-10-07 11:31:50.226948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.594 [2024-10-07 11:31:50.229730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.594 [2024-10-07 11:31:50.229840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.594 [2024-10-07 11:31:50.229872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.594 [2024-10-07 11:31:50.229889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.229920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.229952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.229970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.229984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.230014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.595 [2024-10-07 11:31:50.236265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.236389] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.236421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.595 [2024-10-07 11:31:50.236438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.236470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.236502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.236519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.236534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.236565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.595 [2024-10-07 11:31:50.240006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.240122] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.240153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.595 [2024-10-07 11:31:50.240171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.240453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.240610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.240646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.240664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.240772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.595 [2024-10-07 11:31:50.247202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.247401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.247450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.595 [2024-10-07 11:31:50.247479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.247529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.247577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.247605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.247629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.247676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.595 [2024-10-07 11:31:50.251797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.252001] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.252036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.595 [2024-10-07 11:31:50.252054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.253337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.254058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.254097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.254116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.254502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.595 [2024-10-07 11:31:50.257734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.257890] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.257933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.595 [2024-10-07 11:31:50.257967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.258039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.258086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.258113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.258136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.258203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.595 [2024-10-07 11:31:50.264378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.265551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.265620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.595 [2024-10-07 11:31:50.265655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.265889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.266042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.266088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.266121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.267709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.595 [2024-10-07 11:31:50.268963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.269090] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.269124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.595 [2024-10-07 11:31:50.269141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.269175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.269208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.269226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.269240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.270544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.595 [2024-10-07 11:31:50.274648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.274764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.274795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.595 [2024-10-07 11:31:50.274812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.274848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.274879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.274897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.595 [2024-10-07 11:31:50.274935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.595 [2024-10-07 11:31:50.274970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.595 [2024-10-07 11:31:50.279184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.595 [2024-10-07 11:31:50.279300] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.595 [2024-10-07 11:31:50.279346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.595 [2024-10-07 11:31:50.279365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.595 [2024-10-07 11:31:50.279399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.595 [2024-10-07 11:31:50.279432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.595 [2024-10-07 11:31:50.279450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.279464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.279494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.285165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.285281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.285313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.285349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.285382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.285414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.285432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.285447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.285477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.596 [2024-10-07 11:31:50.289275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.289400] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.289433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.596 [2024-10-07 11:31:50.289450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.290633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.290873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.290900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.290915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.291655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.295432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.295564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.295596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.295614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.295869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.296032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.296068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.296086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.296195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.596 [2024-10-07 11:31:50.299380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.299493] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.299525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.596 [2024-10-07 11:31:50.299542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.299573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.299605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.299623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.299637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.299667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.305540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.305654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.305685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.305711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.305742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.305774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.305793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.305807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.306558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.596 [2024-10-07 11:31:50.309788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.309920] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.309951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.596 [2024-10-07 11:31:50.309969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.310001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.310051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.310071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.310085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.310116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.316046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.316162] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.316194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.316210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.316242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.316274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.316292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.316307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.316353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.596 [2024-10-07 11:31:50.320592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.320706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.320737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.596 [2024-10-07 11:31:50.320754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.320794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.320825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.320843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.320857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.320888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.326558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.326672] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.326704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.326721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.326753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.326785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.326803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.326817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.326865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.596 [2024-10-07 11:31:50.330685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.330810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.330842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.596 [2024-10-07 11:31:50.330859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.332025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.596 [2024-10-07 11:31:50.332283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.596 [2024-10-07 11:31:50.332337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.596 [2024-10-07 11:31:50.332357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.596 [2024-10-07 11:31:50.333105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.596 [2024-10-07 11:31:50.336823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.596 [2024-10-07 11:31:50.336938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.596 [2024-10-07 11:31:50.336970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.596 [2024-10-07 11:31:50.336988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.596 [2024-10-07 11:31:50.337239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.337404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.337448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.337467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.337575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.340784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.340894] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.340925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.340942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.340973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.341005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.341024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.341038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.341068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.597 [2024-10-07 11:31:50.346917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.347030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.347062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.597 [2024-10-07 11:31:50.347096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.347129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.347161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.347179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.347194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.347935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.351160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.351282] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.351327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.351348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.351381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.351413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.351431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.351445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.351475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.597 [2024-10-07 11:31:50.357384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.357498] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.357529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.597 [2024-10-07 11:31:50.357547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.357579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.357611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.357629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.357643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.357673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.361885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.361998] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.362029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.362046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.362078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.362127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.362163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.362179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.362211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.597 [2024-10-07 11:31:50.367856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.367970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.368001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.597 [2024-10-07 11:31:50.368019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.368050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.368081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.368100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.368115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.368145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.371974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.372085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.372116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.372133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.373297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.373553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.373589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.373606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.374360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.597 [2024-10-07 11:31:50.378098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.378210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.378242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.597 [2024-10-07 11:31:50.378260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.378546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.378696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.378723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.378737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.378844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.382066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.382178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.382210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.382227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.382259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.382304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.382339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.382355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.382395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.597 [2024-10-07 11:31:50.388224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.388359] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.388392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.597 [2024-10-07 11:31:50.388410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.597 [2024-10-07 11:31:50.388443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.597 [2024-10-07 11:31:50.388475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.597 [2024-10-07 11:31:50.388494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.597 [2024-10-07 11:31:50.388509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.597 [2024-10-07 11:31:50.388539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.597 [2024-10-07 11:31:50.392177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.597 [2024-10-07 11:31:50.392296] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.597 [2024-10-07 11:31:50.392340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.597 [2024-10-07 11:31:50.392360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.392614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.392772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.392808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.392826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.392935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.399088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.399203] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.399235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.598 [2024-10-07 11:31:50.399253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.399311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.399361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.399381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.399395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.399425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.598 [2024-10-07 11:31:50.402266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.402405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.402437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.598 [2024-10-07 11:31:50.402454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.402485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.402517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.402535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.402549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.402580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.409648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.409761] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.409793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.598 [2024-10-07 11:31:50.409810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.409842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.409874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.409892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.409906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.409936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.598 [2024-10-07 11:31:50.412962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.413076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.413107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.598 [2024-10-07 11:31:50.413125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.413157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.413190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.413208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.413239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.413273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.419987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.420107] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.420138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.598 [2024-10-07 11:31:50.420155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.420422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.420585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.420612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.420627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.420733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.598 [2024-10-07 11:31:50.423536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.423650] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.423682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.598 [2024-10-07 11:31:50.423699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.423731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.423763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.423780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.423794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.423825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.430084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.430197] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.430229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.598 [2024-10-07 11:31:50.430246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.430278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.430337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.430358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.430373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.430403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.598 [2024-10-07 11:31:50.433843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.433972] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.434004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.598 [2024-10-07 11:31:50.434021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.434272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.434448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.434484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.434500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.434607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.440700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.440813] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.440845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.598 [2024-10-07 11:31:50.440862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.440894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.440925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.440943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.440958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.440987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.598 [2024-10-07 11:31:50.443950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.444060] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.444091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.598 [2024-10-07 11:31:50.444108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.598 [2024-10-07 11:31:50.444139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.598 [2024-10-07 11:31:50.444171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.598 [2024-10-07 11:31:50.444189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.598 [2024-10-07 11:31:50.444203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.598 [2024-10-07 11:31:50.444234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.598 [2024-10-07 11:31:50.451284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.598 [2024-10-07 11:31:50.451408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.598 [2024-10-07 11:31:50.451440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.599 [2024-10-07 11:31:50.451458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.451490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.451541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.451561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.451575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.451605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.599 [2024-10-07 11:31:50.454673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.454787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.454818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.599 [2024-10-07 11:31:50.454836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.454867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.454898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.454917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.454930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.454961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.599 [2024-10-07 11:31:50.461695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.461810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.461841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.599 [2024-10-07 11:31:50.461859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.462110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.462265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.462312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.462346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.462456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.599 [2024-10-07 11:31:50.465222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.465345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.465378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.599 [2024-10-07 11:31:50.465396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.465428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.465461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.465479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.465493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.465541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.599 [2024-10-07 11:31:50.471782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.471897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.471928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.599 [2024-10-07 11:31:50.471946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.471978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.472010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.472028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.472042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.472073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.599 [2024-10-07 11:31:50.475547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.475660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.475691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.599 [2024-10-07 11:31:50.475709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.475961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.476107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.476143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.476161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.476269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.599 [2024-10-07 11:31:50.482466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.482579] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.482611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.599 [2024-10-07 11:31:50.482628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.482660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.482692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.482710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.482725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.482754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.599 [2024-10-07 11:31:50.485641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.485740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.485770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.599 [2024-10-07 11:31:50.485806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.485839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.485871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.485889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.485902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.485932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.599 [2024-10-07 11:31:50.494180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.494330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.494364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.599 [2024-10-07 11:31:50.494382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.494425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.494458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.494476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.494491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.494522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.599 [2024-10-07 11:31:50.495719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.599 [2024-10-07 11:31:50.495814] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.599 [2024-10-07 11:31:50.495844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.599 [2024-10-07 11:31:50.495862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.599 [2024-10-07 11:31:50.496693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.599 [2024-10-07 11:31:50.496892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.599 [2024-10-07 11:31:50.496918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.599 [2024-10-07 11:31:50.496933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.599 [2024-10-07 11:31:50.497018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.599 8960.21 IOPS, 35.00 MiB/s [2024-10-07T11:31:53.123Z] [2024-10-07 11:31:50.505560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.506439] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.506490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.506512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.506639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.506712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.506747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.506764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.506778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.506808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.600 [2024-10-07 11:31:50.506871] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.506899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.506915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.507170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.507335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.507363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.507378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.507488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.517771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.517812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.517992] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.518026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.518044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.518094] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.518118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.518133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.518167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.518191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.518218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.518236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.518250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:57.600 [2024-10-07 11:31:50.518267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.518281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.518309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.518359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.518378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.527883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.527958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.528039] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.528068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.528085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.529030] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.529074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.529094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.529113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.529305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.529353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.529369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.529383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.529426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.529446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.529460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.529474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.529502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.600 [2024-10-07 11:31:50.539686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.539746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.539840] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.539871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.539888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.539938] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.539961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.539977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.540010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.540033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.540060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.540077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.540114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.540132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.540146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.540160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.540198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.540222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.600 [2024-10-07 11:31:50.550020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.550071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.550164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.550195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.550212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.550263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.550301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.550334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.550593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.550625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.600 [2024-10-07 11:31:50.550758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.550784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.550800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.550817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.600 [2024-10-07 11:31:50.550831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.600 [2024-10-07 11:31:50.550843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.600 [2024-10-07 11:31:50.550949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.600 [2024-10-07 11:31:50.550970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.600 [2024-10-07 11:31:50.560146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.560222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.600 [2024-10-07 11:31:50.560304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.560346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.600 [2024-10-07 11:31:50.560363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.600 [2024-10-07 11:31:50.560430] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.600 [2024-10-07 11:31:50.560457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.600 [2024-10-07 11:31:50.560492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.560512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.560546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.560567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.560582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.560596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.561334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.601 [2024-10-07 11:31:50.561361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.561376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.561390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.561559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.601 [2024-10-07 11:31:50.570846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.570897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.570990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.571021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.601 [2024-10-07 11:31:50.571038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.571085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.571108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.601 [2024-10-07 11:31:50.571124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.571156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.571180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.571207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.571224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.571238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.571255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.571269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.571282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.571312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.601 [2024-10-07 11:31:50.571345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.601 [2024-10-07 11:31:50.581399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.581474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.581574] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.581605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.601 [2024-10-07 11:31:50.581623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.581671] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.581694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.601 [2024-10-07 11:31:50.581709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.581748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.581771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.581799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.581817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.581831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.581848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.581862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.581876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.581905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.601 [2024-10-07 11:31:50.581922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.601 [2024-10-07 11:31:50.591841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.591907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.592009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.592041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.601 [2024-10-07 11:31:50.592058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.592108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.592131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.601 [2024-10-07 11:31:50.592147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.592427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.592461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.592597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.592624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.592639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.592675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.592692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.592706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.592813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.601 [2024-10-07 11:31:50.592834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.601 [2024-10-07 11:31:50.601987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.602038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.602132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.602163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.601 [2024-10-07 11:31:50.602181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.602229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.602252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.601 [2024-10-07 11:31:50.602267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.602312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.602353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.603076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.603124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.603143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.603160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.603175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.603188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.603372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.601 [2024-10-07 11:31:50.603398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.601 [2024-10-07 11:31:50.612613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.612663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.601 [2024-10-07 11:31:50.612759] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.612790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.601 [2024-10-07 11:31:50.612808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.612856] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.601 [2024-10-07 11:31:50.612880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.601 [2024-10-07 11:31:50.612895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.601 [2024-10-07 11:31:50.612946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.612970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.601 [2024-10-07 11:31:50.612997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.601 [2024-10-07 11:31:50.613015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.601 [2024-10-07 11:31:50.613029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.601 [2024-10-07 11:31:50.613046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.613060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.613073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.613102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.602 [2024-10-07 11:31:50.613119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.602 [2024-10-07 11:31:50.623264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.623312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.623423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.623454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.602 [2024-10-07 11:31:50.623471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.623519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.623542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.602 [2024-10-07 11:31:50.623558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.623591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.623614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.623641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.623659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.623673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.623689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.623703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.623716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.623747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.602 [2024-10-07 11:31:50.623764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.602 [2024-10-07 11:31:50.633622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.633671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.633786] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.633817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.602 [2024-10-07 11:31:50.633834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.633881] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.633904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.602 [2024-10-07 11:31:50.633920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.634173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.634203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.634375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.634403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.634418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.634436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.634450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.634462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.634568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.602 [2024-10-07 11:31:50.634589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.602 [2024-10-07 11:31:50.643771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.643820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.643912] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.643942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.602 [2024-10-07 11:31:50.643959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.644008] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.644030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.602 [2024-10-07 11:31:50.644046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.644077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.644100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.644136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.644154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.644170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.644186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.644217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.644232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.644978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.602 [2024-10-07 11:31:50.645005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.602 [2024-10-07 11:31:50.654621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.654660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.654750] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.654780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.602 [2024-10-07 11:31:50.654798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.654846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.602 [2024-10-07 11:31:50.654870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.602 [2024-10-07 11:31:50.654886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.602 [2024-10-07 11:31:50.654918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.654941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.602 [2024-10-07 11:31:50.654968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.654986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.655001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.655017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.602 [2024-10-07 11:31:50.655031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.602 [2024-10-07 11:31:50.655044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.602 [2024-10-07 11:31:50.655073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.602 [2024-10-07 11:31:50.655090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.602 [2024-10-07 11:31:50.665392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.665444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.602 [2024-10-07 11:31:50.665539] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.665570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.665588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.665637] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.665661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.665687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.665718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.665761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.665790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.665809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.665823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.665839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.665853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.665867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.665897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.603 [2024-10-07 11:31:50.665914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.603 [2024-10-07 11:31:50.675888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.675945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.676038] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.676070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.676087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.676135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.676159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.676174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.676444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.676476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.676619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.676645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.676660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.676676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.676690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.676704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.676809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.603 [2024-10-07 11:31:50.676829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.603 [2024-10-07 11:31:50.686039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.686116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.686229] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.686304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.686341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.686397] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.686421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.686442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.686477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.686502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.687237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.687277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.687296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.687314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.687346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.687360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.687539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.603 [2024-10-07 11:31:50.687564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.603 [2024-10-07 11:31:50.697008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.697057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.697157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.697188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.697205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.697255] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.697278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.697294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.697343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.697370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.697398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.697416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.697430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.697447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.697462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.697493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.697526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.603 [2024-10-07 11:31:50.697543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.603 [2024-10-07 11:31:50.707595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.707645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.707738] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.707769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.707786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.707834] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.707864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.707879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.707911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.707934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.707961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.707978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.707993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.708008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.603 [2024-10-07 11:31:50.708022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.603 [2024-10-07 11:31:50.708036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.603 [2024-10-07 11:31:50.708066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.603 [2024-10-07 11:31:50.708082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.603 [2024-10-07 11:31:50.717985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.718035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.603 [2024-10-07 11:31:50.718128] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.718158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.603 [2024-10-07 11:31:50.718175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.718223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.603 [2024-10-07 11:31:50.718246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.603 [2024-10-07 11:31:50.718261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.603 [2024-10-07 11:31:50.718562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.603 [2024-10-07 11:31:50.718596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.718748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.718774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.718789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.718807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.718821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.718834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.718940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.718961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.728169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.728250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.728371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.728404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.728421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.728470] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.728494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.728510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.728543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.728568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.728595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.728613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.728629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.728645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.728660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.728675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.729440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.729468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.739003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.739048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.739140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.739170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.739214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.739270] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.739294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.739310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.739364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.739388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.739416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.739434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.739449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.739465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.739479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.739493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.739522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.739539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.749629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.749679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.749773] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.749804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.749821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.749870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.749893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.749909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.749941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.749964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.749991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.750009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.750023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.750039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.750054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.750067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.750112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.750131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.759978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.760031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.760126] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.760157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.760175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.760225] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.760248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.760264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.760535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.760567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.760704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.760729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.760744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.760761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.760775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.760788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.760893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.760913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.770111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.770162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.770254] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.770296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.770331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.770387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.770411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.770427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.770460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.770484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.604 [2024-10-07 11:31:50.770511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.770554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.770569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.770587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.604 [2024-10-07 11:31:50.770601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.604 [2024-10-07 11:31:50.770614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.604 [2024-10-07 11:31:50.771356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.604 [2024-10-07 11:31:50.771383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.604 [2024-10-07 11:31:50.780774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.780824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.604 [2024-10-07 11:31:50.780917] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.780948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.604 [2024-10-07 11:31:50.780966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.781016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.604 [2024-10-07 11:31:50.781039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.604 [2024-10-07 11:31:50.781054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.604 [2024-10-07 11:31:50.781086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.781109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.781136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.781154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.781168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.781184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.781198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.781211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.781241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.781258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.791356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.791404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.791497] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.791528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.791545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.791604] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.791635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.791652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.791684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.791708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.791735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.791752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.791767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.791783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.791797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.791810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.791839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.791857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.801774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.801826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.801919] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.801950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.801967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.802016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.802039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.802055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.802334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.802367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.802504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.802530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.802546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.802563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.802578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.802591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.802696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.802737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.811900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.811974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.812054] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.812083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.812100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.812164] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.812192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.812208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.812226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.812974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.813017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.813035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.813049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.813221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.813246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.813261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.813275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.813379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.822565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.822619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.822711] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.822742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.822760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.822808] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.822831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.822847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.822881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.822905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.822932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.822950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.822983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.823001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.823015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.823029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.823060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.823077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.833134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.833175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.833265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.833296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.833313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.833381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.833405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.833421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.833454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.833478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.833505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.833523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.833538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.833554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.833568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.833583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.833613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.833630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.843593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.843646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.843739] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.843770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.605 [2024-10-07 11:31:50.843788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.843835] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.605 [2024-10-07 11:31:50.843858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.605 [2024-10-07 11:31:50.843891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.605 [2024-10-07 11:31:50.844147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.844178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.605 [2024-10-07 11:31:50.844337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.844365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.844380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.844397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.605 [2024-10-07 11:31:50.844411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.605 [2024-10-07 11:31:50.844424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.605 [2024-10-07 11:31:50.844531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.605 [2024-10-07 11:31:50.844551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.605 [2024-10-07 11:31:50.853719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.853793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.605 [2024-10-07 11:31:50.853873] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.853902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.853919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.853983] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.854010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.854026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.854045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.854077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.854097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.854112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.854125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.854878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.854907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.854922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.854936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.855106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.864399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.864466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.864562] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.864594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.864611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.864660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.864683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.864699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.864731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.864754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.864781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.864799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.864813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.864830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.864844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.864857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.864887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.864904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.875078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.875128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.875220] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.875251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.875268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.875330] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.875356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.875372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.875405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.875429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.875456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.875474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.875488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.875520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.875536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.875549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.875581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.875599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.885464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.885514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.885606] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.885637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.885654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.885701] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.885724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.885740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.885992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.886023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.886174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.886199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.886214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.886231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.886246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.886259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.886391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.886415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.895635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.895683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.895775] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.895806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.895824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.895872] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.895895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.895910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.895960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.895983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.896010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.896028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.896042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.896059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.896073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.896086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.896827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.896855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.906398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.906447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.906540] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.906571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.906588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.906635] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.906659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.906674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.906706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.906729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.906756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.906774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.906788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.906804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.906818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.906831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.906860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.906878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.606 [2024-10-07 11:31:50.916987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.917038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.606 [2024-10-07 11:31:50.917159] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.917190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.606 [2024-10-07 11:31:50.917208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.917256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.606 [2024-10-07 11:31:50.917279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.606 [2024-10-07 11:31:50.917295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.606 [2024-10-07 11:31:50.917341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.917367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.606 [2024-10-07 11:31:50.917394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.917412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.917427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.917444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.606 [2024-10-07 11:31:50.917458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.606 [2024-10-07 11:31:50.917471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.606 [2024-10-07 11:31:50.917501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.606 [2024-10-07 11:31:50.917518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.927504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.927554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.927647] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.927678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.927694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.927742] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.927766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.927781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.928036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.928068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.928202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.928227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.928242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.928259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.928290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.928305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.928429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.607 [2024-10-07 11:31:50.928451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.937627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.937700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.937779] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.937808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.937824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.937888] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.937915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.937931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.937949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.938702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.938745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.938763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.938777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.938949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.607 [2024-10-07 11:31:50.938975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.938989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.939003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.939111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.948214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.948263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.948371] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.948402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.948420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.948468] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.948490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.948506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.948538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.948580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.948609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.948628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.948642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.948658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.948672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.948685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.948716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.607 [2024-10-07 11:31:50.948739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.958752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.958802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.958895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.958925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.958943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.958990] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.959013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.959029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.959060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.959084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.959110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.959128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.959142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.959158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.959172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.959185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.959215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.607 [2024-10-07 11:31:50.959232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.969058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.969109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.969217] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.969264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.969282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.969369] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.969396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.969411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.969665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.969697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.969834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.969860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.969875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.969891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.969905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.969918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.970024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.607 [2024-10-07 11:31:50.970045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.607 [2024-10-07 11:31:50.979181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.979254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.607 [2024-10-07 11:31:50.979345] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.979375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.607 [2024-10-07 11:31:50.979392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.979458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.607 [2024-10-07 11:31:50.979485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.607 [2024-10-07 11:31:50.979501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.607 [2024-10-07 11:31:50.979519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.979550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.607 [2024-10-07 11:31:50.979572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.607 [2024-10-07 11:31:50.979586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.607 [2024-10-07 11:31:50.979599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.607 [2024-10-07 11:31:50.980338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:50.980365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:50.980398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:50.980413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:50.980584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:50.989794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:50.989845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:50.989937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:50.989967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:50.989985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:50.990033] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:50.990056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:50.990072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:50.990104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:50.990126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:50.990153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:50.990171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:50.990185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:50.990202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:50.990217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:50.990230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:50.990259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:50.990277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.000247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.000297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.000403] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.000435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.000452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.000499] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.000522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.000538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.000569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.000592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.000640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.000659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.000673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.000690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.000704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.000717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.000748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:51.000765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.010502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.010552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.010644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.010675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.010692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.010740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.010762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.010778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.011030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.011061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.011196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.011222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.011237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.011254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.011269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.011282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.011401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:51.011424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.020623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.020698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.020778] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.020806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.020842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.020913] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.020940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.020956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.020974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.021719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.021762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.021780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.021794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.021966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:51.021991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.022005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.022019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.022109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.031117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.031168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.031260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.031291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.031308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.031373] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.031397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.031412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.031444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.031467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.031494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.031511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.031526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.031542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.031556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.031569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.031614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:51.031633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.041574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.041623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.041715] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.041745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.041762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.041810] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.041833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.041848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.041880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.041903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.041930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.041948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.041964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.041980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.608 [2024-10-07 11:31:51.041994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.608 [2024-10-07 11:31:51.042007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.608 [2024-10-07 11:31:51.042037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.608 [2024-10-07 11:31:51.042054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.608 [2024-10-07 11:31:51.051800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.051850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.608 [2024-10-07 11:31:51.051942] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.051973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.608 [2024-10-07 11:31:51.051989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.052037] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.608 [2024-10-07 11:31:51.052061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.608 [2024-10-07 11:31:51.052076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.608 [2024-10-07 11:31:51.052342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.608 [2024-10-07 11:31:51.052375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.052512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.052554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.052569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.052587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.052602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.052615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.052721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.052742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.061921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.062010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.062088] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.062117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.062134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.062198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.062225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.062241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.062260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.063018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.063063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.063082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.063097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.063287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.063313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.063344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.063359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.063450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.072429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.072479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.072573] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.072604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.072622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.072694] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.072719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.072735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.072769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.072792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.072819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.072837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.072852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.072869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.072883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.072897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.072927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.072944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.082916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.082966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.083058] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.083088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.083106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.083154] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.083177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.083192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.083224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.083248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.083274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.083292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.083306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.083337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.083354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.083369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.083400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.083432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.093201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.093251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.093356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.093388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.093406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.093454] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.093477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.093493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.093746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.093777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.093913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.093939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.093955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.093972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.093987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.094000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.094105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.094125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.103343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.103415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.103494] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.103522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.103538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.103602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.103629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.103645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.103665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.104414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.104455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.104472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.104508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.104682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.104707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.104721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.104736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.104846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.113961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.114013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.114108] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.114138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.609 [2024-10-07 11:31:51.114155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.114207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.114230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.114246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.114278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.114327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.609 [2024-10-07 11:31:51.114360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.114378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.114393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.114409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.609 [2024-10-07 11:31:51.114423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.609 [2024-10-07 11:31:51.114436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.609 [2024-10-07 11:31:51.114466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.609 [2024-10-07 11:31:51.114483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.609 [2024-10-07 11:31:51.124486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.124537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.609 [2024-10-07 11:31:51.124631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.124661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.609 [2024-10-07 11:31:51.124678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.609 [2024-10-07 11:31:51.124727] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.609 [2024-10-07 11:31:51.124750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.124786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.124820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.124844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.124871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.124889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.124903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.124920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.124934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.124947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.124977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.124996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.135593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.135653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.135762] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.135795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.135813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.135862] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.135895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.135911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.135944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.135967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.135995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.136013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.136027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.136044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.136058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.136071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.136101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.136119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.146477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.146570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.146706] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.146749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.146776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.146859] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.146893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.146919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.148543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.148613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.149828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.149890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.149919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.149946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.149970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.149991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.151804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.151855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.158617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.158670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.158844] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.158878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.158896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.158944] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.158968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.158984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.159017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.159041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.159068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.159086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.159101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.159136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.159153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.159167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.159909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.159948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.169375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.169425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.169522] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.169552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.169570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.169617] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.169640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.169656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.169688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.169711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.169738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.169756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.169770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.169787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.169801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.169814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.169843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.169860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.179998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.180050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.180145] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.180175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.180193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.180240] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.180263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.180295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.180348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.180375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.180407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.180425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.180439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.180455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.180469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.180483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.610 [2024-10-07 11:31:51.180513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.610 [2024-10-07 11:31:51.180530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.610 [2024-10-07 11:31:51.190348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.190399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.610 [2024-10-07 11:31:51.190504] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.190536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.610 [2024-10-07 11:31:51.190554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.190611] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.610 [2024-10-07 11:31:51.190634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.610 [2024-10-07 11:31:51.190650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.610 [2024-10-07 11:31:51.190903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.190935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.610 [2024-10-07 11:31:51.191072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.610 [2024-10-07 11:31:51.191097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.610 [2024-10-07 11:31:51.191112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.191130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.191144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.191157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.191263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.191283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.200479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.200542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.200665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.200696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.200713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.200764] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.200787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.200803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.200835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.200858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.200885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.200903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.200917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.200933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.200947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.200960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.201702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.201729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.211221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.211272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.211380] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.211412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.211430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.211478] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.211502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.211517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.211549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.211573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.211600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.211618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.211632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.211648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.211678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.211692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.211724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.211742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.221783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.221833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.221929] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.221960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.221978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.222026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.222049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.222065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.222097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.222120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.222147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.222165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.222179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.222196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.222210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.222223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.222253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.222270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.232099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.232150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.232243] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.232274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.232291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.232357] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.232382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.232399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.232672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.232705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.232842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.232867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.232882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.232900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.232914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.232927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.233032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.233052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.242225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.242309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.242404] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.242433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.242450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.242516] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.242543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.242560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.242586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.242620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.242641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.242655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.242668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.243405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.243434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.243449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.243463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.243651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.252950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.253000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.253093] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.253141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.611 [2024-10-07 11:31:51.253160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.253210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.253233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.253248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.253281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.253304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.611 [2024-10-07 11:31:51.253348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.253368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.253382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.253399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.611 [2024-10-07 11:31:51.253413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.611 [2024-10-07 11:31:51.253426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.611 [2024-10-07 11:31:51.253456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.611 [2024-10-07 11:31:51.253473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.611 [2024-10-07 11:31:51.263600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.263675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.611 [2024-10-07 11:31:51.263788] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.263828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.611 [2024-10-07 11:31:51.263846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.611 [2024-10-07 11:31:51.263895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.611 [2024-10-07 11:31:51.263919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.263935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.263969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.263993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.264021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.264039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.264054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.264072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.264086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.264120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.264171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.264193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.274007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.274057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.274151] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.274182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.274200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.274249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.274271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.274300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.274578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.274610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.274746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.274772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.274788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.274804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.274819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.274832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.274938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.274958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.284134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.284210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.284290] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.284334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.284354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.284421] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.284448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.284464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.284483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.285230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.285272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.285290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.285304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.285489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.285516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.285531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.285544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.285653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.294758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.294809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.294903] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.294935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.294952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.294999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.295022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.295037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.295069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.295092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.295119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.295137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.295151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.295168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.295182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.295195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.295225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.295242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.305289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.305354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.305449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.305480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.305515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.305569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.305593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.305608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.305640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.305664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.305693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.305711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.305725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.305741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.305755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.305768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.305798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.305815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.315618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.315669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.315999] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.316032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.316050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.316098] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.316121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.316137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.316273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.316303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.316425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.316448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.316462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.316479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.316494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.316507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.316562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.316582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.325741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.325817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.325897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.325926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.612 [2024-10-07 11:31:51.325942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.326005] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.326032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.612 [2024-10-07 11:31:51.326048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.612 [2024-10-07 11:31:51.326066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.326820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.612 [2024-10-07 11:31:51.326863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.326880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.326894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.327066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.612 [2024-10-07 11:31:51.327092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.612 [2024-10-07 11:31:51.327107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.612 [2024-10-07 11:31:51.327120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.612 [2024-10-07 11:31:51.327210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.612 [2024-10-07 11:31:51.336267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.336330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.612 [2024-10-07 11:31:51.336427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.612 [2024-10-07 11:31:51.336458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.336475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.336524] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.336547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.336562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.336594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.336617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.336668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.336688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.336702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.336718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.336732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.336745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.336775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.336792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.346803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.346853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.346946] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.346977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.346994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.347041] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.347064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.347079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.347110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.347133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.347160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.347177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.347192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.347208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.347222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.347235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.347264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.347281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.357073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.357123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.357232] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.357262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.357279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.357363] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.357390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.357406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.357660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.357692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.357828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.357854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.357869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.357886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.357900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.357914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.358019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.358040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.367225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.367315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.367408] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.367436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.367453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.367519] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.367546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.367562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.367580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.368306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.368360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.368378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.368392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.368565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.368591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.368605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.368619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.368729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.377851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.377912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.378017] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.378049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.378066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.378115] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.378138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.378154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.378187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.378211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.378239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.378257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.378271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.378299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.378337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.378353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.378386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.378404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.388485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.388537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.388633] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.388664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.388681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.388729] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.388752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.388768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.388801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.388825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.388851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.388870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.388907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.388925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.388940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.388953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.388984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.613 [2024-10-07 11:31:51.389002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.613 [2024-10-07 11:31:51.398877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.398928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.613 [2024-10-07 11:31:51.399026] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.399057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.613 [2024-10-07 11:31:51.399075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.399123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.613 [2024-10-07 11:31:51.399146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.613 [2024-10-07 11:31:51.399162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.613 [2024-10-07 11:31:51.399430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.399462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.613 [2024-10-07 11:31:51.399598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.399623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.399638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.399655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.613 [2024-10-07 11:31:51.399670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.613 [2024-10-07 11:31:51.399683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.613 [2024-10-07 11:31:51.399788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.399808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.409003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.409078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.409157] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.409186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.409202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.409266] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.409311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.409348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.409368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.410096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.410138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.410156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.410170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.410385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.410414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.410428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.410442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.410533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.419589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.419640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.419735] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.419766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.419783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.419831] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.419854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.419869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.419901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.419924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.419952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.419969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.419984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.420000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.420014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.420029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.420059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.420076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.430107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.430158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.430253] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.430298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.430332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.430387] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.430412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.430428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.430461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.430485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.430511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.430529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.430543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.430559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.430573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.430586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.430615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.430632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.440463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.440514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.440827] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.440870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.440889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.440940] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.440963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.440979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.441117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.441146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.441249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.441270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.441303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.441338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.441357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.441370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.441411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.441430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.450588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.450661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.450740] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.450769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.450785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.450849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.450876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.450893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.450913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.451655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.451697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.451715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.451729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.451901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.451926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.451941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.451955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.452063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.461118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.461167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.461260] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.461291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.614 [2024-10-07 11:31:51.461308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.461376] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.614 [2024-10-07 11:31:51.461400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.614 [2024-10-07 11:31:51.461434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.614 [2024-10-07 11:31:51.461469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.461492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.614 [2024-10-07 11:31:51.461520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.461538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.461552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.461569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.614 [2024-10-07 11:31:51.461583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.614 [2024-10-07 11:31:51.461596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.614 [2024-10-07 11:31:51.461625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.614 [2024-10-07 11:31:51.461642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.614 [2024-10-07 11:31:51.471625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.471675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.614 [2024-10-07 11:31:51.471767] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.471798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.471815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.471863] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.471885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.615 [2024-10-07 11:31:51.471901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.471933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.471956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.471983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.472001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.472015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.472031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.472045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.472058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.472088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.615 [2024-10-07 11:31:51.472105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.615 [2024-10-07 11:31:51.481919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.481968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.482077] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.482108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.615 [2024-10-07 11:31:51.482126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.482174] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.482197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.482213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.482509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.482544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.482681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.482706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.482721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.482738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.482753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.482766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.482871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.615 [2024-10-07 11:31:51.482891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.615 [2024-10-07 11:31:51.492062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.492143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.492228] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.492257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.492275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.492355] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.492384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.615 [2024-10-07 11:31:51.492400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.492419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.493155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.493196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.493215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.493229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.493419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.615 [2024-10-07 11:31:51.493463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.493480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.493494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.493588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.615 8975.67 IOPS, 35.06 MiB/s [2024-10-07T11:31:53.138Z] [2024-10-07 11:31:51.502163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.502329] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.502363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.502382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.502431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.502472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.502503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.502521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.502535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.502564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.615 [2024-10-07 11:31:51.502622] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.502648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.615 [2024-10-07 11:31:51.502664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.502695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.502727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.502744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.502759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.502791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.615 
00:20:57.615 Latency(us)
00:20:57.615 [2024-10-07T11:31:53.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.615 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:57.615 Verification LBA range: start 0x0 length 0x4000
00:20:57.615 NVMe0n1 : 15.01 8976.11 35.06 0.00 0.00 14227.40 1459.67 17992.61
00:20:57.615 [2024-10-07T11:31:53.138Z] ===================================================================================================================
00:20:57.615 [2024-10-07T11:31:53.138Z] Total : 8976.11 35.06 0.00 0.00 14227.40 1459.67 17992.61
00:20:57.615 [2024-10-07 11:31:51.512243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.615 [2024-10-07 11:31:51.512379] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.615 [2024-10-07 11:31:51.512412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421
00:20:57.615 [2024-10-07 11:31:51.512453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set
00:20:57.615 [2024-10-07 11:31:51.512495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor
00:20:57.615 [2024-10-07 11:31:51.512520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.615 [2024-10-07 11:31:51.512536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.615 [2024-10-07 11:31:51.512550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.615 [2024-10-07 11:31:51.512574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:57.615 [2024-10-07 11:31:51.512594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.615 [2024-10-07 11:31:51.512662] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.615 [2024-10-07 11:31:51.512689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422
00:20:57.615 [2024-10-07 11:31:51.512706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set
00:20:57.615 [2024-10-07 11:31:51.512726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor
00:20:57.615 [2024-10-07 11:31:51.512752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:57.615 [2024-10-07 11:31:51.512766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:57.615 [2024-10-07 11:31:51.512780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:57.615 [2024-10-07 11:31:51.512797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
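As a quick, illustrative cross-check of the Latency(us) summary above: at the 4096-byte IO size of this verify job, the IOPS and MiB/s columns agree, since 8976.11 IOPS x 4096 bytes is about 35.06 MiB/s. The same arithmetic as a one-off awk call (not part of the test):

# Cross-check of the summary rows above: throughput [MiB/s] = IOPS x IO size [bytes] / 2^20
awk 'BEGIN {
  iops = 8976.11     # IOPS column reported for NVMe0n1
  io_size = 4096     # IO size from the job description line
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 35.06
}'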
00:20:57.615 [2024-10-07 11:31:51.522332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.522423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.522452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.522469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.522489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.522508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.522523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.522536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.522553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.615 [2024-10-07 11:31:51.522632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.522695] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.522721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.615 [2024-10-07 11:31:51.522737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.522756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.522775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.522803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.522818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.522835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.615 [2024-10-07 11:31:51.532392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.615 [2024-10-07 11:31:51.532482] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.615 [2024-10-07 11:31:51.532510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.615 [2024-10-07 11:31:51.532527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.615 [2024-10-07 11:31:51.532547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.615 [2024-10-07 11:31:51.532566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.615 [2024-10-07 11:31:51.532581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.615 [2024-10-07 11:31:51.532595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.615 [2024-10-07 11:31:51.532611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.616 [2024-10-07 11:31:51.532670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.616 [2024-10-07 11:31:51.532734] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.616 [2024-10-07 11:31:51.532760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.616 [2024-10-07 11:31:51.532776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.616 [2024-10-07 11:31:51.532795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.616 [2024-10-07 11:31:51.532814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.616 [2024-10-07 11:31:51.532828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.616 [2024-10-07 11:31:51.532841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.616 [2024-10-07 11:31:51.532858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.616 [2024-10-07 11:31:51.542450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.616 [2024-10-07 11:31:51.542537] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.616 [2024-10-07 11:31:51.542565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.616 [2024-10-07 11:31:51.542581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.616 [2024-10-07 11:31:51.542601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.616 [2024-10-07 11:31:51.542620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.616 [2024-10-07 11:31:51.542635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.616 [2024-10-07 11:31:51.542648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.616 [2024-10-07 11:31:51.542665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.616 [2024-10-07 11:31:51.542706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.616 [2024-10-07 11:31:51.542784] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.616 [2024-10-07 11:31:51.542810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.616 [2024-10-07 11:31:51.542826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.616 [2024-10-07 11:31:51.542845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.616 [2024-10-07 11:31:51.542864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.616 [2024-10-07 11:31:51.542879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.616 [2024-10-07 11:31:51.542892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.616 [2024-10-07 11:31:51.542909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.616 [2024-10-07 11:31:51.552507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.616 [2024-10-07 11:31:51.552593] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.616 [2024-10-07 11:31:51.552621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20280 with addr=10.0.0.3, port=4421 00:20:57.616 [2024-10-07 11:31:51.552637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20280 is same with the state(6) to be set 00:20:57.616 [2024-10-07 11:31:51.552657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20280 (9): Bad file descriptor 00:20:57.616 [2024-10-07 11:31:51.552676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.616 [2024-10-07 11:31:51.552690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.616 [2024-10-07 11:31:51.552704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.616 [2024-10-07 11:31:51.552720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.616 [2024-10-07 11:31:51.552755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.616 [2024-10-07 11:31:51.552819] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.616 [2024-10-07 11:31:51.552845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe189a0 with addr=10.0.0.3, port=4422 00:20:57.616 [2024-10-07 11:31:51.552861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe189a0 is same with the state(6) to be set 00:20:57.616 [2024-10-07 11:31:51.552880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe189a0 (9): Bad file descriptor 00:20:57.616 [2024-10-07 11:31:51.552899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.616 [2024-10-07 11:31:51.552913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.616 [2024-10-07 11:31:51.552926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.616 [2024-10-07 11:31:51.552942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
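The same refuse/reset cycle repeats on both paths until the 15-second run expires. Illustrative only: if the console output has been captured to a file (the try.txt removed by the failover script below is assumed here to be such a capture), the failed cycles can be counted per listener port with standard tools:

# Illustrative post-mortem counts; the capture path is an assumption based on the
# rm -f of try.txt in the cleanup below, so adjust it to wherever the output was saved.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
grep -c 'Resetting controller failed' "$log"        # total failed reset completions
grep -oE 'port=[0-9]+' "$log" | sort | uniq -c      # connection errors per listener port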
00:20:57.616 Received shutdown signal, test time was about 15.000000 seconds 00:20:57.616 00:20:57.616 Latency(us) 00:20:57.616 [2024-10-07T11:31:53.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.616 [2024-10-07T11:31:53.139Z] =================================================================================================================== 00:20:57.616 [2024-10-07T11:31:53.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:57.616 Process with pid 75490 is not found 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # killprocess 75490 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75490 ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75490 00:20:57.616 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75490) - No such process 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # echo 'Process with pid 75490 is not found' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # nvmftestfini 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.616 rmmod nvme_tcp 00:20:57.616 rmmod nvme_fabrics 00:20:57.616 rmmod nvme_keyring 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 75427 ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 75427 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75427 ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75427 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75427 00:20:57.616 killing process with pid 75427 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75427' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75427 
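The trace above exercises the harness's killprocess helper twice: pid 75490 is already gone (kill -0 fails, so only a notice is printed), while pid 75427 (an SPDK reactor) is still running and gets killed and reaped. A simplified sketch of that pattern follows; the real helper in autotest_common.sh also special-cases processes started under sudo and handles more error paths than shown here:

# Simplified sketch of the killprocess pattern seen in the trace above; not the
# verbatim autotest_common.sh implementation (sudo handling and timeouts omitted).
killprocess() {
  local pid=$1
  [[ -z "$pid" ]] && return 1
  if ! kill -0 "$pid" 2>/dev/null; then             # kill -0 only tests that the pid exists
    echo "Process with pid $pid is not found"
    return 0
  fi
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for an SPDK app
  echo "killing process with pid $pid"              # the real helper branches when $process_name is sudo
  kill "$pid"
  wait "$pid" 2>/dev/null                           # reap it if it was started by this shell
}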
00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75427 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:57.616 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:57.875 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:57.875 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.875 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # exit 1 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # trap - ERR 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # print_backtrace 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:57.876 11:31:52 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh' 'nvmf_failover' '--transport=tcp') 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.876 ========== Backtrace start: ========== 00:20:57.876 00:20:57.876 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_failover"],["/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh"],["--transport=tcp"]) 00:20:57.876 ... 00:20:57.876 1120 timing_enter $test_name 00:20:57.876 1121 echo "************************************" 00:20:57.876 1122 echo "START TEST $test_name" 00:20:57.876 1123 echo "************************************" 00:20:57.876 1124 xtrace_restore 00:20:57.876 1125 time "$@" 00:20:57.876 1126 xtrace_disable 00:20:57.876 1127 echo "************************************" 00:20:57.876 1128 echo "END TEST $test_name" 00:20:57.876 1129 echo "************************************" 00:20:57.876 1130 timing_exit $test_name 00:20:57.876 ... 00:20:57.876 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh:25 -> main(["--transport=tcp"]) 00:20:57.876 ... 00:20:57.876 20 fi 00:20:57.876 21 00:20:57.876 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:20:57.876 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:20:57.876 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:20:57.876 => 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:20:57.876 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:20:57.876 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:20:57.876 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:20:57.876 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:20:57.876 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:20:57.876 ... 
00:20:57.876 00:20:57.876 ========== Backtrace end ========== 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:20:57.876 00:20:57.876 real 0m22.115s 00:20:57.876 user 1m20.112s 00:20:57.876 sys 0m5.096s 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1 -- # exit 1 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.876 ========== Backtrace start: ========== 00:20:57.876 00:20:57.876 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:20:57.876 ... 00:20:57.876 1120 timing_enter $test_name 00:20:57.876 1121 echo "************************************" 00:20:57.876 1122 echo "START TEST $test_name" 00:20:57.876 1123 echo "************************************" 00:20:57.876 1124 xtrace_restore 00:20:57.876 1125 time "$@" 00:20:57.876 1126 xtrace_disable 00:20:57.876 1127 echo "************************************" 00:20:57.876 1128 echo "END TEST $test_name" 00:20:57.876 1129 echo "************************************" 00:20:57.876 1130 timing_exit $test_name 00:20:57.876 ... 00:20:57.876 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:20:57.876 ... 00:20:57.876 11 exit 0 00:20:57.876 12 fi 00:20:57.876 13 00:20:57.876 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 17 00:20:57.876 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:20:57.876 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:20:57.876 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:20:57.876 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:20:57.876 ... 
00:20:57.876 00:20:57.876 ========== Backtrace end ========== 00:20:57.876 11:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:20:57.876 00:20:57.876 real 0m49.641s 00:20:57.876 user 2m58.113s 00:20:57.876 sys 0m12.730s 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:57.876 ========== Backtrace start: ========== 00:20:57.876 00:20:57.876 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:20:57.876 ... 00:20:57.876 1120 timing_enter $test_name 00:20:57.876 1121 echo "************************************" 00:20:57.876 1122 echo "START TEST $test_name" 00:20:57.876 1123 echo "************************************" 00:20:57.876 1124 xtrace_restore 00:20:57.876 1125 time "$@" 00:20:57.876 1126 xtrace_disable 00:20:57.876 1127 echo "************************************" 00:20:57.876 1128 echo "END TEST $test_name" 00:20:57.876 1129 echo "************************************" 00:20:57.876 1130 timing_exit $test_name 00:20:57.876 ... 00:20:57.876 in /home/vagrant/spdk_repo/spdk/autotest.sh:280 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:20:57.876 ... 00:20:57.876 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:20:57.876 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:20:57.876 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:20:57.876 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:20:57.876 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:57.876 284 fi 00:20:57.876 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:20:57.876 ... 
00:20:57.876 00:20:57.876 ========== Backtrace end ========== 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:20:57.876 00:20:57.876 real 8m51.828s 00:20:57.876 user 21m19.620s 00:20:57.876 sys 2m13.785s 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:57.876 11:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.090 INFO: APP EXITING 00:21:10.090 INFO: killing all VMs 00:21:10.090 INFO: killing vhost app 00:21:10.090 INFO: EXIT DONE 00:21:10.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:10.090 Waiting for block devices as requested 00:21:10.090 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.090 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:10.348 Cleaning 00:21:10.348 Removing: /var/run/dpdk/spdk0/config 00:21:10.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:10.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:10.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:10.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:10.606 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:10.606 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:10.606 Removing: /var/run/dpdk/spdk1/config 00:21:10.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:10.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:10.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:10.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:10.606 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:10.606 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:10.606 Removing: /var/run/dpdk/spdk2/config 00:21:10.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:10.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:10.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:10.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:10.606 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:10.606 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:10.606 Removing: /var/run/dpdk/spdk3/config 00:21:10.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:10.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:10.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:10.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:10.606 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:10.606 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:10.606 Removing: /var/run/dpdk/spdk4/config 00:21:10.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:10.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:10.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:10.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:10.606 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:10.606 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:10.606 Removing: /dev/shm/nvmf_trace.0 00:21:10.606 Removing: /dev/shm/spdk_tgt_trace.pid56732 00:21:10.606 Removing: /var/run/dpdk/spdk0 00:21:10.606 Removing: /var/run/dpdk/spdk1 00:21:10.606 Removing: 
/var/run/dpdk/spdk2 00:21:10.606 Removing: /var/run/dpdk/spdk3 00:21:10.606 Removing: /var/run/dpdk/spdk4 00:21:10.606 Removing: /var/run/dpdk/spdk_pid56573 00:21:10.606 Removing: /var/run/dpdk/spdk_pid56732 00:21:10.606 Removing: /var/run/dpdk/spdk_pid56938 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57023 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57052 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57161 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57179 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57319 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57514 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57668 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57746 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57830 00:21:10.606 Removing: /var/run/dpdk/spdk_pid57929 00:21:10.606 Removing: /var/run/dpdk/spdk_pid58014 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58053 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58083 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58158 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58250 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58707 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58759 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58810 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58826 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58893 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58915 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58982 00:21:10.607 Removing: /var/run/dpdk/spdk_pid58998 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59049 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59059 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59105 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59123 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59259 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59289 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59377 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59711 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59723 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59765 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59773 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59794 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59813 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59832 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59853 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59872 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59887 00:21:10.607 Removing: /var/run/dpdk/spdk_pid59901 00:21:10.865 Removing: /var/run/dpdk/spdk_pid59928 00:21:10.865 Removing: /var/run/dpdk/spdk_pid59941 00:21:10.865 Removing: /var/run/dpdk/spdk_pid59962 00:21:10.865 Removing: /var/run/dpdk/spdk_pid59981 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60001 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60017 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60036 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60049 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60070 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60103 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60122 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60151 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60223 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60252 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60267 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60295 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60305 00:21:10.865 Removing: /var/run/dpdk/spdk_pid60318 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60360 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60374 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60408 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60416 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60427 00:21:10.866 Removing: 
/var/run/dpdk/spdk_pid60436 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60446 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60461 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60470 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60480 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60514 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60540 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60550 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60584 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60593 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60601 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60647 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60653 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60685 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60698 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60700 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60713 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60726 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60728 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60741 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60749 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60825 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60878 00:21:10.866 Removing: /var/run/dpdk/spdk_pid60998 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61039 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61077 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61097 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61113 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61133 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61170 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61186 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61264 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61285 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61335 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61403 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61475 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61503 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61599 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61646 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61684 00:21:10.866 Removing: /var/run/dpdk/spdk_pid61916 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62008 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62042 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62072 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62105 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62144 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62178 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62209 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62617 00:21:10.866 Removing: /var/run/dpdk/spdk_pid62655 00:21:10.866 Removing: /var/run/dpdk/spdk_pid63005 00:21:10.866 Removing: /var/run/dpdk/spdk_pid63486 00:21:10.866 Removing: /var/run/dpdk/spdk_pid63764 00:21:10.866 Removing: /var/run/dpdk/spdk_pid64673 00:21:10.866 Removing: /var/run/dpdk/spdk_pid65603 00:21:10.866 Removing: /var/run/dpdk/spdk_pid65720 00:21:10.866 Removing: /var/run/dpdk/spdk_pid65793 00:21:10.866 Removing: /var/run/dpdk/spdk_pid67226 00:21:10.866 Removing: /var/run/dpdk/spdk_pid67542 00:21:10.866 Removing: /var/run/dpdk/spdk_pid71378 00:21:11.138 Removing: /var/run/dpdk/spdk_pid71755 00:21:11.138 Removing: /var/run/dpdk/spdk_pid71866 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72001 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72022 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72056 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72090 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72188 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72324 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72493 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72580 
00:21:11.138 Removing: /var/run/dpdk/spdk_pid72787 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72870 00:21:11.138 Removing: /var/run/dpdk/spdk_pid72963 00:21:11.138 Removing: /var/run/dpdk/spdk_pid73332 00:21:11.138 Removing: /var/run/dpdk/spdk_pid73744 00:21:11.138 Removing: /var/run/dpdk/spdk_pid73745 00:21:11.138 Removing: /var/run/dpdk/spdk_pid73746 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74015 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74341 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74343 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74678 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74693 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74707 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74740 00:21:11.138 Removing: /var/run/dpdk/spdk_pid74745 00:21:11.138 Removing: /var/run/dpdk/spdk_pid75103 00:21:11.138 Removing: /var/run/dpdk/spdk_pid75156 00:21:11.138 Removing: /var/run/dpdk/spdk_pid75490 00:21:11.138 Clean 00:21:17.715 11:32:12 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:21:17.715 11:32:12 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:21:17.715 11:32:12 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:21:17.726 [Pipeline] } 00:21:17.745 [Pipeline] // timeout 00:21:17.754 [Pipeline] } 00:21:17.770 [Pipeline] // stage 00:21:17.776 [Pipeline] } 00:21:17.780 ERROR: script returned exit code 1 00:21:17.780 Setting overall build result to FAILURE 00:21:17.793 [Pipeline] // catchError 00:21:17.801 [Pipeline] stage 00:21:17.803 [Pipeline] { (Stop VM) 00:21:17.815 [Pipeline] sh 00:21:18.094 + vagrant halt 00:21:22.280 ==> default: Halting domain... 00:21:27.554 [Pipeline] sh 00:21:27.836 + vagrant destroy -f 00:21:32.024 ==> default: Removing domain... 00:21:32.035 [Pipeline] sh 00:21:32.312 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output 00:21:32.319 [Pipeline] } 00:21:32.333 [Pipeline] // stage 00:21:32.338 [Pipeline] } 00:21:32.352 [Pipeline] // dir 00:21:32.357 [Pipeline] } 00:21:32.369 [Pipeline] // wrap 00:21:32.375 [Pipeline] } 00:21:32.387 [Pipeline] // catchError 00:21:32.395 [Pipeline] stage 00:21:32.397 [Pipeline] { (Epilogue) 00:21:32.410 [Pipeline] sh 00:21:32.729 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:35.338 [Pipeline] catchError 00:21:35.340 [Pipeline] { 00:21:35.353 [Pipeline] sh 00:21:35.635 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:35.635 Artifacts sizes are good 00:21:35.642 [Pipeline] } 00:21:35.656 [Pipeline] // catchError 00:21:35.666 [Pipeline] archiveArtifacts 00:21:35.672 Archiving artifacts 00:21:35.873 [Pipeline] cleanWs 00:21:35.884 [WS-CLEANUP] Deleting project workspace... 00:21:35.884 [WS-CLEANUP] Deferred wipeout is used... 00:21:35.890 [WS-CLEANUP] done 00:21:35.892 [Pipeline] } 00:21:35.907 [Pipeline] // stage 00:21:35.912 [Pipeline] } 00:21:35.927 [Pipeline] // node 00:21:35.933 [Pipeline] End of Pipeline 00:21:35.971 Finished: FAILURE
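For reference, the post-failure teardown performed by the pipeline stages above reduces to the following command sequence (reconstructed from the logged steps; this is not the actual Groovy pipeline code, and the workspace path is the one used in this particular run):

# Reconstruction of the pipeline's teardown steps as logged above (illustrative).
vagrant halt                        # Stop VM: shut the test VM down
vagrant destroy -f                  # Remove domain: delete the VM without prompting
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output    # collect results into the workspace
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh               # Epilogue: compress artifacts
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh             # verify artifact sizes before archiving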